Results 1–10 of 137,268
Clayton copula and mixture decomposition
 In ASMDA 2005
, 2005
"... Abstract. A symbolic variable is often described by a histogram. More generally, it can be provided in the form of a continuous distribution. In this case, the problem is to solve the most frequent problem in data mining, namely: to classify the objects starting from the description of the variables ..."
Abstract

Cited by 4 (2 self)
 Add to MetaCart
A symbolic variable is often described by a histogram. More generally, it can be provided in the form of a continuous distribution. In this case, the problem is to solve the most frequent problem in data mining, namely: to classify the objects starting from the description of the variables in the form of continuous distributions. A solution is to sample each distribution at a number N of points, to evaluate the joint distribution of these values using copulas, and to adapt the dynamical clustering (nuées dynamiques) method to these joint densities. In this paper we compare the Clayton copula and the Normal copula in more than 2 dimensions, and we compare the clustering results of the Clayton-copula-based method against traditional methods (MCLUST and K-means). Our comparison is based on 2 well-known classical data files.
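The entry above evaluates joint densities of sampled distribution values through a copula. As a minimal, hypothetical illustration (not the authors' code), a bivariate Clayton copula can be sampled with the standard conditional-inverse method:

```python
import numpy as np

def sample_clayton(n, theta, rng=None):
    """Draw n pairs (u, v) from a bivariate Clayton copula with parameter
    theta > 0, via the conditional-inverse method."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)   # first margin, uniform on (0, 1)
    t = rng.uniform(size=n)   # auxiliary uniform for inverting C(v | u)
    # Invert the conditional CDF of the Clayton copula given u:
    v = (u ** -theta * (t ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u, v

u, v = sample_clayton(20000, theta=2.0, rng=0)
# theta = 2 gives Kendall's tau = theta / (theta + 2) = 0.5, i.e. strong
# positive (lower-tail) dependence between the two uniform margins.
print(np.corrcoef(u, v)[0, 1])
```

Extending this to the N-dimensional case used in the paper requires the multivariate Clayton generator; the bivariate sampler only shows the dependence structure being compared against the Normal copula.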
Mixtures of Probabilistic Principal Component Analysers
, 1998
"... Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a com ..."
Abstract

Cited by 537 (6 self)
 Add to MetaCart
combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a
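The mixture construction above rests on PCA admitting a probability model. A sketch of the closed-form maximum-likelihood fit for a single probabilistic PCA component, assuming the standard eigendecomposition route, is:

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood probabilistic PCA.
    Returns the loading matrix W (d x q) and the noise variance sigma2,
    so the model covariance is W @ W.T + sigma2 * I."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)                    # d x d sample covariance
    evals, evecs = np.linalg.eigh(S)                # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]      # sort descending
    sigma2 = evals[q:].mean()                       # average discarded variance
    W = evecs[:, :q] * np.sqrt(evals[:q] - sigma2)  # principal loadings
    return W, sigma2

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))
W, sigma2 = ppca_ml(X, 2)
```

In a mixture, each component carries its own W and sigma2 and the responsibilities come from the resulting Gaussian densities; the sketch above covers only the single-model fit.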
Probabilistic Latent Semantic Analysis
 In Proc. of Uncertainty in Artificial Intelligence, UAI’99
, 1999
"... Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of twomode and cooccurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Sema ..."
Abstract

Cited by 760 (9 self)
 Add to MetaCart
Semantic Analysis which stems from linear algebra and performs a Singular Value Decomposition of cooccurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics. In order
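The latent class (aspect) model behind this entry, P(d, w) = Σ_z P(z) P(d|z) P(w|z), is fit by EM. A minimal sketch of plain EM on a document-word count matrix (without the tempering refinements discussed in related work):

```python
import numpy as np

def plsa(N, K, iters=50, seed=0):
    """Minimal EM for the PLSA aspect model P(d,w) = sum_z P(z) P(d|z) P(w|z).
    N is a document-word count matrix (D x W)."""
    rng = np.random.default_rng(seed)
    D, W = N.shape
    Pz = np.full(K, 1.0 / K)
    Pdz = rng.dirichlet(np.ones(D), size=K)   # K x D rows of P(d|z)
    Pwz = rng.dirichlet(np.ones(W), size=K)   # K x W rows of P(w|z)
    for _ in range(iters):
        # E-step: responsibilities P(z | d, w), shape K x D x W
        joint = Pz[:, None, None] * Pdz[:, :, None] * Pwz[:, None, :]
        post = joint / joint.sum(axis=0, keepdims=True)
        # M-step: re-estimate parameters from expected counts
        nz = (post * N).sum(axis=(1, 2))
        Pdz = (post * N).sum(axis=2) / nz[:, None]
        Pwz = (post * N).sum(axis=1) / nz[:, None]
        Pz = nz / nz.sum()
    return Pz, Pdz, Pwz
```

The dense K x D x W responsibility array is only viable for toy data; practical implementations iterate over the nonzero counts instead.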
Visual Comparison of Datasets using Mixture Decompositions
"... We describe how a mixture of two densities f 0 and f 1 may be decomposed into a different mixture consisting of three densities. These new densities, f + , f \Gamma , and f= , summarize differences between f 0 and f 1 : f + is high in areas of excess of f 1 compared to f 0 ; f \Gamma represents defi ..."
Abstract
 Add to MetaCart
deficiency of f 1 compared to f 0 in the same way; f= represents commonality between f 1 and f 0 . The supports of f+ and f \Gamma are disjoint. This decomposition of the mixture of f 0 and f 1 is similar to the settheoretic decomposition of the union of two sets A and B into the disjoint sets AnB, Bn
Visual Comparison of Datasets Using Mixture Decompositions
"... This article describes how a mixture of two densities, f0 and f1, may be decomposed into a different mixture consisting of three densities. These new densities, f+, f−, and f=, summarize differences between f0 and f1: f+ is high in areas of excess of f1 compared to f0; f − represents deficiency of f ..."
Abstract
 Add to MetaCart
of f1 compared to f0 in the same way; f = represents commonality between f1 and f0. The supports of f+ and f − are disjoint. This decomposition of the mixture of f0 and f1 is similar to the settheoretic decomposition of the union of two sets A and B into the disjoint sets A\B, B\A, and A ∩ B. Sample
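The three-density decomposition described above can be checked pointwise on a grid. This sketch uses two hypothetical Gaussian densities and assumes equal mixture weights; it verifies the identity f0 + f1 = (f1 − f0)_+ + (f0 − f1)_+ + 2 min(f0, f1) and the disjointness of the excess and deficiency supports:

```python
import numpy as np

x = np.linspace(-6, 6, 1001)
f0 = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)        # N(0, 1)
f1 = np.exp(-0.5 * (x - 2)**2) / np.sqrt(2 * np.pi)  # N(2, 1)

excess     = np.maximum(f1 - f0, 0.0)  # unnormalised f+: where f1 exceeds f0
deficiency = np.maximum(f0 - f1, 0.0)  # unnormalised f-: where f1 falls short
common     = np.minimum(f0, f1)        # unnormalised f=: shared mass

# The parts recombine to the equal-weight mixture, and the supports of the
# excess and deficiency components never overlap:
assert np.allclose(excess + deficiency + 2 * common, f0 + f1)
assert np.all(excess * deficiency == 0.0)
```

Normalising each part to integrate to one yields the densities f+, f−, and f= with their mixture weights.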
A multilinear singular value decomposition
 SIAM J. Matrix Anal. Appl
, 2000
"... Abstract. We discuss a multilinear generalization of the singular value decomposition. There is a strong analogy between several properties of the matrix and the higherorder tensor decomposition; uniqueness, link with the matrix eigenvalue decomposition, firstorder perturbation effects, etc., are ..."
Abstract

Cited by 467 (20 self)
 Add to MetaCart
Abstract. We discuss a multilinear generalization of the singular value decomposition. There is a strong analogy between several properties of the matrix and the higherorder tensor decomposition; uniqueness, link with the matrix eigenvalue decomposition, firstorder perturbation effects, etc
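A compact sketch of the higher-order SVD for a dense NumPy tensor: each factor matrix comes from the SVD of one mode-n unfolding, and the core tensor is obtained by multiplying the transposed factors back in. With full factor matrices the reconstruction is exact:

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding: mode n becomes the rows."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold for a tensor of the given shape."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_dot(T, U, n):
    """Mode-n product T x_n U."""
    return fold(U @ unfold(T, n), n,
                T.shape[:n] + (U.shape[0],) + T.shape[n + 1:])

def hosvd(T):
    """Higher-order SVD: orthogonal factor per mode plus core tensor."""
    Us = [np.linalg.svd(unfold(T, n))[0] for n in range(T.ndim)]
    S = T
    for n, U in enumerate(Us):
        S = mode_dot(S, U.T, n)
    return S, Us

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 4, 5))
S, Us = hosvd(T)
R = S
for n, U in enumerate(Us):
    R = mode_dot(R, U, n)
assert np.allclose(R, T)   # exact reconstruction with full factors
```

Truncating each factor matrix to its leading columns gives the usual low-multilinear-rank approximation.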
Image denoising using a scale mixture of Gaussians in the wavelet domain
 IEEE Trans. Image Processing
, 2003
"... We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vecto ..."
Abstract

Cited by 514 (17 self)
 Add to MetaCart
We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
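For a single coefficient, the Bayesian least-squares estimate described above reduces to a weighted average of per-multiplier Wiener estimates. This sketch uses a hypothetical uniform grid prior on the hidden multiplier rather than the paper's prior, and a scalar rather than a neighborhood:

```python
import numpy as np

def gsm_bls(y, c, sigma2, z_grid, pz):
    """BLS estimate of a scalar GSM coefficient x = sqrt(z) * g, g ~ N(0, c),
    observed as y = x + n with n ~ N(0, sigma2): average the per-z Wiener
    estimates E[x | y, z], weighted by the posterior p(z | y)."""
    var_y = z_grid * c + sigma2                          # Var(y | z)
    like = np.exp(-0.5 * y**2 / var_y) / np.sqrt(var_y)  # p(y | z), up to a constant
    post = like * pz
    post /= post.sum()                                   # p(z | y) on the grid
    wiener = z_grid * c / var_y * y                      # E[x | y, z]
    return np.sum(post * wiener)

z = np.linspace(0.05, 5.0, 200)     # grid over the hidden multiplier
pz = np.ones_like(z) / len(z)       # illustrative uniform prior
xhat = gsm_bls(2.0, c=1.0, sigma2=0.5, z_grid=z, pz=pz)
print(xhat)   # lies strictly between 0 and the observation 2.0
```

Each Wiener factor z·c / (z·c + sigma2) lies in (0, 1), so the posterior average always shrinks the noisy observation toward zero.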
Spectral Mixture Decomposition by Least Dependent Component Analysis
, 2005
"... A recently proposed mutual information based algorithm for decomposing data into least dependent components (MILCA) is applied to spectral analysis, namely to blind recovery of concentrations and pure spectra from their linear mixtures. The algorithm is based on precise estimates of mutual informati ..."
Abstract
 Add to MetaCart
. In combination with second derivative preprocessing and alternating least squares postprocessing, MILCA shows decomposition performance comparable with or superior to specialized chemometrics algorithms. The results are illustrated on a number of simulated and experimental (infrared and Raman) mixture problems
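MILCA itself rests on mutual-information estimates that are beyond a short sketch, but the alternating least squares postprocessing step mentioned above can be illustrated on a hypothetical, noise-free bilinear mixing model A = C S: given the mixture spectra and a current estimate of the pure spectra, the concentration profiles are re-estimated by least squares and clipped to be non-negative.

```python
import numpy as np

rng = np.random.default_rng(0)
S_true = rng.uniform(size=(3, 100))   # 3 pure component spectra, 100 channels
C_true = rng.uniform(size=(20, 3))    # concentrations in 20 mixtures
A = C_true @ S_true                   # observed mixture spectra

# One ALS half-step: solve A ~ C @ S for C with S fixed, then clip to >= 0.
C_est = np.clip(np.linalg.lstsq(S_true.T, A.T, rcond=None)[0].T, 0.0, None)
assert np.allclose(C_est, C_true)     # exact data: concentrations recovered
```

A full ALS loop alternates this step with the symmetric update of S until convergence; with noisy data neither update is exact and the clipping actively enforces non-negativity.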
Unsupervised Learning by Probabilistic Latent Semantic Analysis
 Machine Learning
, 2001
"... Abstract. This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the latter method which stems from linear algebra and performs a Singular Value Decomposition of cooccurren ..."
Abstract

Cited by 612 (4 self)
 Add to MetaCart
occurrence tables, the proposed technique uses a generative latent class model to perform a probabilistic mixture decomposition. This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature controlled version of the Expectation
Categorization of Image Databases for Efficient Retrieval Using Robust Mixture Decomposition
, 1998
"... In this paper, we present a robust mixture decomposition technique that automatically finds a compact representation of the data in terms of categories. We apply it to the problem of organizing databases for efficient retrieval. The time taken for retrieval is shown to be an order of magnitude small ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
In this paper, we present a robust mixture decomposition technique that automatically finds a compact representation of the data in terms of categories. We apply it to the problem of organizing databases for efficient retrieval. The time taken for retrieval is shown to be an order of magnitude