A Framework for Robust Subspace Learning
International Journal of Computer Vision, 2003
Cited by 177 (10 self)

Abstract: Many computer vision, signal processing and statistical problems can be posed as problems of learning low-dimensional linear or multilinear models. These models have been widely used for the representation of shape, appearance, motion, etc., in computer vision applications.
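The basic linear-model setup this paper robustifies can be sketched with ordinary least-squares PCA. Note the hedge in the docstring: the paper's contribution is precisely to replace the quadratic error with a robust one, which this sketch does not attempt; all data and names below are illustrative.

```python
import numpy as np

def learn_subspace(X, k):
    """Learn a k-dimensional linear subspace from the rows of X via SVD.

    Standard least-squares PCA baseline; the paper replaces the quadratic
    error with a robust error function, which this sketch omits.
    """
    mu = X.mean(axis=0)                 # data mean
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                   # top-k principal directions

def project(X, mu, basis):
    """Reconstruct X from its coordinates in the learned subspace."""
    coeffs = (X - mu) @ basis.T
    return mu + coeffs @ basis

# Usage: data lying exactly on a 2-D plane in 5-D reconstructs exactly.
rng = np.random.default_rng(0)
coords = rng.normal(size=(100, 2))
X = coords @ rng.normal(size=(2, 5)) + 0.5
mu, B = learn_subspace(X, 2)
```

On noiseless data the rank-2 reconstruction is exact; the robust variants in the paper matter when outliers would otherwise dominate the quadratic fit.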
Robust Parameterized Component Analysis: Theory and Applications to 2D Facial Modeling
Computer Vision and Image Understanding, 91:53–71, 2002
Cited by 53 (12 self)

Abstract: Principal Component Analysis (PCA) has been successfully applied to construct linear models of shape, gray-level, and motion. In particular, PCA has been widely used to model the variation in the appearance of people's faces. We extend previous work on facial modeling for tracking faces in video sequences as they undergo significant changes due to facial expressions. Here we develop person-specific facial appearance models (PSFAMs), which use modular PCA to model complex intra-person appearance changes. Such models require aligned visual training data; in previous work, this has involved a time-consuming and error-prone hand alignment and cropping process. Instead, we introduce parameterized component analysis to learn a subspace that is invariant to affine (or higher-order) geometric transformations. The automatic learning of a PSFAM given a training image sequence is posed as a continuous optimization problem and is solved with a mixture of stochastic and deterministic techniques achieving sub-pixel accuracy.
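The "modular PCA" ingredient (an independent PCA per image region) can be sketched as follows. This is only the modular part: the paper's main contribution, learning the geometric alignment jointly, is assumed already done here, and the block layout and function names are illustrative.

```python
import numpy as np

def modular_pca(images, n_blocks, k):
    """Fit an independent k-component PCA to each horizontal strip of the images.

    `images` has shape (n_samples, height, width). Splitting into horizontal
    strips is an arbitrary illustrative choice of region layout; the paper
    additionally learns the alignment, which this sketch assumes is given.
    """
    n = images.shape[0]
    models = []
    for s in np.array_split(images, n_blocks, axis=1):  # split along height
        X = s.reshape(n, -1)
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        models.append((mu, Vt[:k], s.shape[1:]))
    return models

def reconstruct(images, models):
    """Project each strip onto its own subspace and reassemble the images."""
    n = images.shape[0]
    out = []
    for s, (mu, basis, shape) in zip(np.array_split(images, len(models), axis=1), models):
        X = s.reshape(n, -1)
        Xr = mu + (X - mu) @ basis.T @ basis
        out.append(Xr.reshape((n,) + shape))
    return np.concatenate(out, axis=1)

# Usage: images that are rank-3 per strip reconstruct exactly with k=3.
rng = np.random.default_rng(1)
templates = rng.normal(size=(3, 8, 6))
images = np.einsum('nk,khw->nhw', rng.normal(size=(20, 3)), templates)
models = modular_pca(images, n_blocks=2, k=3)
recon = reconstruct(images, models)
```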
Dynamic Coupled Component Analysis
2001
Cited by 25 (6 self)

Abstract: We present a method for simultaneously learning linear models of multiple high-dimensional data sets and the dependencies between them. For example, we learn asymmetrically coupled linear models for the faces of two different people and show how these models can be used to animate one face given a video sequence of the other. We pose the problem as a form of Asymmetric Coupled Component Analysis (ACCA), in which we simultaneously learn the subspaces for reducing the dimensionality of each dataset while coupling the parameters of the low-dimensional representations.
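A much-simplified version of the coupling idea: fit a subspace to each dataset, then learn a linear map between the two sets of low-dimensional coefficients. This is a two-stage stand-in, not the paper's ACCA, which fits the subspaces and the coupling jointly; names and data below are illustrative.

```python
import numpy as np

def pca(X, k):
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def couple(X, Y, k):
    """Learn subspaces for X and Y plus a linear map between their coefficients.

    Simplified sketch: ACCA learns subspaces and coupling jointly; here each
    PCA is fit first and the map M is a least-squares afterthought.
    """
    mux, Bx = pca(X, k)
    muy, By = pca(Y, k)
    Cx = (X - mux) @ Bx.T
    Cy = (Y - muy) @ By.T
    M, *_ = np.linalg.lstsq(Cx, Cy, rcond=None)   # Cy ≈ Cx @ M
    return mux, Bx, muy, By, M

def drive(Xnew, mux, Bx, muy, By, M):
    """Predict Y-frames from new X-frames (e.g. animate one face from another)."""
    return muy + ((Xnew - mux) @ Bx.T) @ M @ By

# Usage: when both datasets share the same 3-D latent cause, Y is recovered
# exactly from X.
rng = np.random.default_rng(4)
Z = rng.normal(size=(50, 3))
X = Z @ rng.normal(size=(3, 10)) + 1.0
Y = Z @ rng.normal(size=(3, 8)) - 2.0
mux, Bx, muy, By, M = couple(X, Y, 3)
```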
Interpreting canonical correlation analysis through biplots of structural correlations and weights
Psychometrika, 1990
Cited by 22 (1 self)

Abstract: This paper extends the biplot technique to canonical correlation analysis and redundancy analysis. The plot of structure correlations is shown to be optimal for displaying the pairwise correlations between the variables of the one set and those of the second. The link between multivariate regression and canonical correlation analysis/redundancy analysis is exploited to produce an optimal biplot that displays a matrix of regression coefficients. This plot can be made from the canonical weights of the predictors and the structure correlations of the criterion variables. An example is used to show how the proposed biplots may be interpreted. Key words: biplot, canonical correlation analysis, canonical weight, inter-battery factor analysis, partial analysis, redundancy analysis, regression coefficient, reduced-rank regression, structure correlations.
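The canonical weights that the biplot displays come from ordinary CCA, which can be sketched via whitening plus an SVD of the cross-covariance. The small ridge term `reg` is an implementation choice for numerical stability, not part of classical CCA.

```python
import numpy as np

def cca(X, Y, k, reg=1e-8):
    """Canonical correlation analysis via whitening and an SVD.

    Returns canonical weights (Wx, Wy) and the top-k canonical correlations.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)
    # Whiten each block with a Cholesky factor, then SVD the cross-covariance.
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    M = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    Wx = np.linalg.solve(Lx.T, U[:, :k])   # canonical weights for X
    Wy = np.linalg.solve(Ly.T, Vt[:k].T)   # canonical weights for Y
    return Wx, Wy, s[:k]

# Usage: when Y is an exact linear function of X, canonical correlations are ~1.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
Y = X @ rng.normal(size=(3, 2))
Wx, Wy, corrs = cca(X, Y, 2)
```

The structure correlations plotted in the paper are then the ordinary correlations between each original variable and the canonical variates `Xc @ Wx` and `Yc @ Wy`.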
Kronecker’s and Newton’s approaches to solving: A First Comparison
1999
Cited by 14 (5 self)

Abstract: In these pages we make a first attempt to compare the efficiency of symbolic and numerical analysis procedures that solve systems of multivariate polynomial equations. In particular, we compare Kronecker's solution (from the symbolic approach) with approximate zero theory (introduced by M. Shub and S. Smale as a foundation of numerical analysis). To this purpose we show upper and lower bounds on the bit length of approximate zeros. We also introduce efficient procedures that transform a local Kronecker solution into approximate zeros and conversely. As an application of our study we exhibit an efficient procedure to compute splitting fields and Lagrange resolvents of univariate polynomial equations. We remark that this procedure is obtained by a convenient combination of both approaches (numeric and symbolic) to multivariate polynomial solving.
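The numerical side of the comparison rests on Newton's iteration for polynomial systems. In Shub–Smale alpha-theory, an "approximate zero" is a starting point from which this iteration converges quadratically to an actual zero. A minimal sketch, with an illustrative system chosen for this example:

```python
import numpy as np

def newton(F, J, x0, steps=20):
    """Multivariate Newton iteration: x_{k+1} = x_k - J(x_k)^{-1} F(x_k)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - np.linalg.solve(J(x), F(x))
    return x

# Example system: x^2 + y^2 - 2 = 0 and x - y = 0, which has the zero (1, 1).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
root = newton(F, J, [2.0, 0.5])
```

Starting from (2, 0.5), the iterate reaches the line x = y after one step and then converges quadratically to (1, 1); the bit-length bounds in the paper quantify how accurately such a starting point must be specified.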
Biplots in reduced-rank regression
Biometrical Journal, 1994
Cited by 9 (0 self)

Abstract: Regression problems with a number of related response variables are typically analyzed by separate multiple regressions. This paper shows how these regressions can be visualized jointly in a biplot based on reduced-rank regression. Reduced-rank regression combines multiple regression and principal components analysis and can therefore be carried out with standard statistical packages. The proposed biplot highlights the major aspects of the regressions by displaying the least-squares approximation of fitted values, regression coefficients and associated t-ratios. The utility and interpretation of the reduced-rank regression biplot is demonstrated with an example using public health data that were previously analyzed by separate multiple regressions.
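The "multiple regression plus principal components" combination mentioned above can be sketched directly: fit OLS, then project the fitted values onto their top-r principal directions. This is the classical unweighted reduced-rank solution; the data below are hypothetical.

```python
import numpy as np

def reduced_rank_regression(X, Y, r):
    """Rank-r least-squares coefficient matrix.

    Fits ordinary least squares, then projects onto the top-r principal
    directions of the fitted values (the classical unweighted reduced-rank
    solution); the biplot in the paper displays exactly these rank-r fits.
    """
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    Yhat = X @ B_ols
    _, _, Vt = np.linalg.svd(Yhat, full_matrices=False)
    P = Vt[:r].T @ Vt[:r]          # projector onto top-r response directions
    return B_ols @ P

# Usage: when the true coefficient matrix has rank 2 and there is no noise,
# rank-2 RRR recovers it exactly.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
B_true = rng.normal(size=(4, 2)) @ rng.normal(size=(2, 6))   # rank-2 truth
Y = X @ B_true
B2 = reduced_rank_regression(X, Y, 2)
```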
Reconstructing Graphs from Neighborhood Data
Cited by 3 (0 self)

Abstract: Consider a social network and suppose that we are given the number of common friends between each pair of users. Can we reconstruct the underlying network? Similarly, consider a set of documents and the words that appear in them. If we know the number of common words for every pair of documents, as well as the number of common documents for every pair of words, can we infer which words appear in which documents? In this paper, we develop a general methodology for answering questions like the ones above. We formalize these questions in what we call the RECONSTRUCT problem: given information about the common neighbors of nodes in a network, our goal is to reconstruct the hidden binary matrix that indicates the presence or absence of relationships between individual nodes. We propose an effective and practical heuristic, which exploits properties of the singular value decomposition of the hidden binary matrix. More specifically, we show that using the available neighborhood information, we can reconstruct the hidden matrix by finding the components of its singular value decomposition and then combining them appropriately. Our extensive experimental study suggests that our methods are able to reconstruct binary matrices of different characteristics with up to 100% accuracy.
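The SVD idea can be sketched on a toy instance: A @ A.T and A.T @ A share the nonzero spectrum of the hidden binary matrix A, so eigendecomposing both recovers the singular vectors up to a per-component sign. This sketch resolves the signs by brute force, which only works for small rank and distinct eigenvalues; the paper proposes a more careful combination step. The toy network is hypothetical.

```python
import numpy as np
from itertools import product

def reconstruct(N_row, N_col, k):
    """Recover a rank-k binary matrix A from N_row = A @ A.T and N_col = A.T @ A.

    Pairs the top-k eigenvectors of both matrices by eigenvalue, then
    brute-forces the 2^k per-component sign choices, keeping the candidate
    whose {0,1} rounding reproduces both co-neighborhood matrices.
    """
    lr, U = np.linalg.eigh(N_row)
    lc, V = np.linalg.eigh(N_col)
    U = U[:, np.argsort(lr)[::-1][:k]]
    V = V[:, np.argsort(lc)[::-1][:k]]
    s = np.sqrt(np.sort(lr)[::-1][:k])          # shared singular values
    for signs in product([1.0, -1.0], repeat=k):
        M = (U * (s * np.array(signs))) @ V.T   # candidate A = U diag(±s) V^T
        A = np.rint(M)
        if np.array_equal(A @ A.T, N_row) and np.array_equal(A.T @ A, N_col):
            return A
    return None

# Hypothetical toy instance: common-neighbor counts for a 2x3 incidence matrix.
A_true = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
A_rec = reconstruct(A_true @ A_true.T, A_true.T @ A_true, k=2)
```

Note that any matrix with the same co-neighborhood counts is a valid answer, so the check is against the counts, not against one particular A.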
A new solution to the additive constant problem in metric multidimensional scaling
Psychometrika, 37, 311–322, 1972
Cited by 3 (0 self)

Abstract: A new solution to the additive constant problem in metric multidimensional scaling is developed. This solution determines, for a given dimensionality, the additive constant and the resulting stimulus projections on the dimensions of a Euclidean space which minimize the sum of squares of discrepancies between the formal model for metric multidimensional scaling and the original data. A modification of Fletcher-Powell style functional iteration is used to compute solutions. A scale-free index of the goodness of fit is developed to aid in selecting solutions of adequate dimensionality from multiple candidates. The additive constant problem was originally formulated as the problem of finding a constant, c, which converted the observed comparative interpoint distance between a pair of stimuli, h_jk, into an absolute interpoint distance, d_jk, in such a way as to minimize the dimensionality of the resulting Euclidean space in which the stimuli were to be represented:

(1) h_jk = d_jk − c,  j, k = 1, 2, …, n, j ≠ k,
(2) d_jk = [ Σ_{m=1}^{t} (a_jm − a_km)² ]^{1/2},

where a_jm is the projection of the jth stimulus on the mth dimension of a t-dimensional Euclidean space. No solution has ever been found for this formulation of the problem. The first systematic reformulation of this problem was by Messick and Abelson [1956]. They worked on the relation of the additive constant to the elements of a matrix of scalar products, B*, among the stimulus projections. They consider the roots and vectors of B* resulting from an Eckart-Young [1936] resolution. They note that in a "true" solution from this approach, there are a minimum number of large roots and the remaining roots are zero. With fallible data, this ideal is not achievable. Neither is it reasonable to assume that all errors would function so as to make the roots of B*, which correspond to zero roots in the "true" solution, positive. Messick
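The underlying machinery, classical (Torgerson) scaling of d_jk = h_jk + c, can be sketched with a brute-force scan over candidate constants. The scan is a simple stand-in for the paper's Fletcher-Powell minimization, and the data are hypothetical.

```python
import numpy as np

def classical_mds(D, t):
    """Torgerson's classical scaling: embed a distance matrix D in t dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                       # double-centered scalar products
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:t]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def fit_additive_constant(H, t, grid):
    """Scan candidate constants c, embedding h_jk + c each time.

    Brute-force stand-in for the paper's Fletcher-Powell iteration: keep the
    c whose embedded interpoint distances best reproduce h_jk + c.
    """
    n = H.shape[0]
    off = ~np.eye(n, dtype=bool)                      # off-diagonal mask
    best = (np.inf, None, None)
    for c in grid:
        D = np.where(off, H + c, 0.0)
        X = classical_mds(D, t)
        E = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        err = np.sum((E[off] - D[off]) ** 2)
        if err < best[0]:
            best = (err, c, X)
    return best

# Usage: comparative distances built by subtracting c = 1.0 from true 2-D
# distances; the scan recovers the constant with essentially zero error.
rng = np.random.default_rng(3)
P = rng.normal(size=(6, 2))
D0 = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
H = np.where(~np.eye(6, dtype=bool), D0 - 1.0, 0.0)
err, c, X = fit_additive_constant(H, 2, np.linspace(0.0, 2.0, 21))
```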
Ten statisticians and their impacts for psychologists
2009
Cited by 2 (0 self)

Abstract: Although psychologists frequently use statistical procedures, they are often unaware of the statisticians most associated with these procedures. Learning more about the people will aid understanding of the techniques. In this article, I present a ...