Results 1–10 of 33,480
A general method for constructing pseudo-Gaussian tests
J. Japan Statist. Soc., 2008
Cited by 9 (3 self)
Abstract: "... A general method for constructing pseudo-Gaussian tests—reducing to traditional Gaussian tests under Gaussian densities but remaining valid under non-Gaussian ones—is proposed. This method provides a solution to several open problems in classical multivariate analysis. One of them is the test of t ..."
Pseudo-Gaussian tests for common principal components
2008
Cited by 4 (4 self)
Abstract: "... The so-called Common Principal Components (CPC) model, in which the covariance matrices Σi of m populations are assumed to have identical eigenvectors, was introduced by Flury (1984). While Gaussian parametric inference methods (Gaussian maximum likelihood estimation; Gaussian likelihood ratio testing) are fully developed for this model, relatively little has been done to extend their validity beyond the Gaussian case. In this paper, we show how Flury (1984)’s Gaussian likelihood ratio test (LRT) for the hypothesis of CPC can be modified into a pseudo-Gaussian test which remains valid under ..."
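For readers skimming the snippet, the CPC hypothesis can be stated compactly; a sketch in LaTeX following Flury's usual formulation, with β the common eigenvector matrix and Λi the population-specific eigenvalue matrices (notation assumed, not quoted from the paper):

```latex
% Common Principal Components (CPC) hypothesis, Flury (1984):
% all m covariance matrices share one orthogonal eigenvector matrix \beta,
% while the eigenvalues may differ across populations.
\[
  \mathcal{H}_{\mathrm{CPC}} :\quad
  \Sigma_i \;=\; \beta \,\Lambda_i\, \beta^{\top},
  \qquad i = 1, \dots, m,
\]
\[
  \beta^{\top}\beta = I_k,
  \qquad
  \Lambda_i = \operatorname{diag}(\lambda_{i1}, \dots, \lambda_{ik}).
\]
```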
Davy Paindaveine: A General Method for Constructing Pseudo-Gaussian Tests
2007
Abstract: "... A general method for constructing pseudo-Gaussian tests—reducing to traditional Gaussian tests under Gaussian densities but remaining valid under non-Gaussian ones—is proposed. This method provides a solution to several open problems in classical multivariate analysis. One of them is the test of hom ..."
Mixtures of Probabilistic Principal Component Analysers
1998
Cited by 537 (6 self)
Abstract: "... Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a com ... maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context ..."
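As a quick illustration of the latent variable model underlying this paper, here is a minimal sketch of fitting a single probabilistic PCA model via the closed-form maximum-likelihood solution of Tipping and Bishop (eigendecomposition of the sample covariance); the mixture version iterates an EM loop over such local fits. Variable names and the synthetic data are ours, not the paper's.

```python
import numpy as np

def fit_ppca(X, q):
    """Closed-form ML fit of probabilistic PCA: x = W z + mu + eps,
    with z ~ N(0, I_q) and eps ~ N(0, sigma2 * I_d)."""
    n, d = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False)              # sample covariance (d, d)
    evals, evecs = np.linalg.eigh(S)              # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]    # sort descending
    sigma2 = evals[q:].mean()                     # ML noise variance: mean of discarded eigenvalues
    # ML loadings W = U_q (Lambda_q - sigma2 I)^{1/2} (rotation R taken as I)
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return mu, W, sigma2

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 5)) \
    + 0.1 * rng.standard_normal((500, 5))         # data near a 2-D subspace of R^5
mu, W, sigma2 = fit_ppca(X, q=2)
print(W.shape, sigma2)
```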
On the statistical analysis of dirty pictures
Journal of the Royal Statistical Society, Series B, 1986
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
2004
Cited by 1513 (20 self)
Abstract: "... Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects—discrete digital signals, images, etc.; how many linear m ... as the class F of those elements whose entries obey the power decay law |f|_(n) ≤ C · n^(-1/p). We take measurements ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian ..."
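The measurement model described in the snippet is easy to simulate; a minimal sketch, assuming a signal with power-law-decaying entries measured against K random Gaussian vectors (the recovery step itself, done via convex optimization in the paper, is omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, p = 1024, 128, 0.5

# Compressible signal: its sorted magnitudes obey |f|_(n) <= C * n^(-1/p), C = 1
f = rng.choice([-1.0, 1.0], size=N) * np.arange(1, N + 1) ** (-1.0 / p)
rng.shuffle(f)

# K linear measurements y_k = <f, X_k> with N-dimensional Gaussian X_k
X = rng.standard_normal((K, N))
y = X @ f

print(y.shape)  # (128,) -- far fewer numbers than the N = 1024 entries of f
```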
Estimating the Support of a High-Dimensional Distribution
1999
Cited by 766 (29 self)
Abstract: "... Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a preliminary theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled d ..."
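This estimator is available off the shelf; a minimal sketch using scikit-learn's OneClassSVM, which implements this one-class support vector algorithm, where nu plays the role of the bound on the fraction of training points left outside S (data and parameter values are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
X_train = rng.standard_normal((200, 2))        # unlabelled training sample from P
X_test = np.array([[0.1, -0.2], [4.0, 4.0]])   # one inlier-like, one outlier-like point

# nu upper-bounds the fraction of training errors (points outside S)
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X_train)

print(clf.predict(X_test))            # +1 inside the estimated support, -1 outside
print(clf.decision_function(X_test))  # the learned f: positive on S, negative on the complement
```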
Iterative point matching for registration of free-form curves and surfaces
1994
Cited by 659 (7 self)
Abstract: "... A heuristic method has been developed for registering two sets of 3-D curves obtained by using an edge-based stereo system, or two dense 3-D maps obtained by using a correlation-based stereo system. Geometric matching in general is a difficult unsolved problem in computer vision. Fortunately, in many practical applications, some a priori knowledge exists which considerably simplifies the problem. In visual navigation, for example, the motion between successive positions is usually approximately known. From this initial estimate, our algorithm computes observer motion with very good precision, which is required for environment modeling (e.g., building a Digital Elevation Map). Objects are represented by a set of 3-D points, which are considered as the samples of a surface. No constraint is imposed on the form of the objects. The proposed algorithm is based on iteratively matching points in one set to the closest points in the other. A statistical method based on the distance distribution is used to deal with outliers, occlusion, appearance and disappearance, which allows us to do subset-subset matching. A least-squares technique is used to estimate 3-D motion from the point correspondences, which reduces the average distance between points in the two sets. Both synthetic and real data have been used to test the algorithm, and the results show that it is efficient and robust, and yields an accurate motion estimate."
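The core loop the abstract describes (closest-point matching followed by a least-squares rigid-motion update) fits in a few lines; a minimal sketch with numpy and scipy, using an SVD-based (Kabsch) least-squares step and omitting the paper's statistical outlier handling:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(P, Q, iters=30):
    """Rigidly align point set P (n, 3) to Q (m, 3): iterate closest-point
    matching against Q and a least-squares (SVD) rigid-motion update."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(Q)
    for _ in range(iters):
        P_cur = P @ R.T + t
        _, idx = tree.query(P_cur)           # match each point to its closest point in Q
        Qm = Q[idx]
        # Least-squares rigid motion between the matched, centred point sets
        cp, cq = P_cur.mean(axis=0), Qm.mean(axis=0)
        H = (P_cur - cp).T @ (Qm - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R_step = Vt.T @ D @ U.T
        t_step = cq - R_step @ cp
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the incremental motion
    return R, t
```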
Iterative decoding of binary block and convolutional codes
IEEE Trans. Inform. Theory, 1996
Cited by 600 (43 self)
Abstract: "... Iterative decoding of two-dimensional systematic convolutional codes has been termed "turbo" (de)coding. Using log-likelihood algebra, we show that any decoder can be used which accepts soft inputs, including a priori values, and delivers soft outputs that can be split into three terms: the soft channel and a priori inputs, and the extrinsic value. The extrinsic value is used as an a priori value for the next iteration. Decoding algorithms in the log-likelihood domain are given not only for convolutional codes but also for any linear binary systematic block code. The iteration is controlled by a stop criterion derived from cross entropy, which results in a minimal number of iterations. Optimal and suboptimal decoders with reduced complexity are presented. Simulation results show that very simple component codes are sufficient, block codes are appropriate for high rates and convolutional codes for rates lower than 2/3. Any combination of block and convolutional component codes is possible. Several interleaving techniques are described. At a bit error rate (BER) of 10^-4 the performance is slightly above or around the bounds given by the cutoff rate for reasonably simple block/convolutional component codes, interleaver sizes less than 1000 and for three to six iterations. Index Terms: Concatenated codes, product codes, iterative decoding, "soft-in/soft-out" decoder, "turbo" (de)coding. ..."
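The three-term split of the soft output mentioned in the abstract is usually written as follows; a sketch in log-likelihood-ratio notation as commonly used for turbo decoding (symbols assumed rather than quoted from the paper):

```latex
% Soft output of a soft-in/soft-out decoder for systematic bit u_k,
% split into channel value, a priori value, and extrinsic value:
\[
  L(\hat{u}_k) \;=\;
  \underbrace{L_c\, y_k}_{\text{soft channel input}}
  \;+\; \underbrace{L(u_k)}_{\text{a priori value}}
  \;+\; \underbrace{L_e(\hat{u}_k)}_{\text{extrinsic value}}
\]
% The extrinsic term L_e is passed to the other component decoder,
% where it serves as the a priori value L(u_k) for the next iteration.
```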
Shiftable Multiscale Transforms
1992
Cited by 557 (36 self)
Abstract: "... Orthogonal wavelet transforms have recently become a popular representation for multiscale signal and image analysis. One of the major drawbacks of these representations is their lack of translation invariance: the content of wavelet subbands is unstable under translations of the input signal. Wavelet transforms are also unstable with respect to dilations of the input signal, and in two dimensions, rotations of the input signal. We formalize these problems by defining a type of translation invariance that we call "shiftability". In the spatial domain, shiftability corresponds to a lack of aliasing; thus, the conditions under which the property holds are specified by the sampling theorem. Shiftability may also be considered in the context of other domains, particularly orientation and scale. We explore "jointly shiftable" transforms that are simultaneously shiftable in more than one domain. Two examples of jointly shiftable transforms are designed and implemented: a one-dimensional tran ..."
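The subband instability the abstract describes is easy to reproduce; a small sketch using the PyWavelets package (a modern library, not something used in this 1992 paper), comparing subband energies of a signal and its one-sample translate:

```python
import numpy as np
import pywt

# A smooth test signal and a copy shifted by a single sample
n = 256
x = np.cos(2 * np.pi * 7 * np.arange(n) / n)
x_shifted = np.roll(x, 1)

# Orthogonal wavelet decomposition of both signals
coeffs = pywt.wavedec(x, "db4", level=4)
coeffs_shifted = pywt.wavedec(x_shifted, "db4", level=4)

# Subband energies change under a mere one-sample translation:
# the hallmark of a non-shiftable (aliased) representation.
for lvl, (c, cs) in enumerate(zip(coeffs, coeffs_shifted)):
    print(lvl, np.sum(c**2), np.sum(cs**2))
```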