Results 1 – 10 of 125
A Multibody Factorization Method for Motion Analysis
, 1995
"... The structurefrommotion problem has been extensively studied in the field of computer vision. Yet, the bulk of the existing work assumes that the scene contains only a single moving object. The more realistic case where an unknown number of objects move in the scene has received little attention, ..."
Abstract

Cited by 167 (2 self)
The structure-from-motion problem has been extensively studied in the field of computer vision. Yet the bulk of the existing work assumes that the scene contains only a single moving object. The more realistic case, where an unknown number of objects move in the scene, has received little attention, especially in its theoretical treatment. In this paper we present a new method for separating and recovering the motion and shape of multiple independently moving objects in a sequence of images. The method does not require prior knowledge of the number of objects, nor is it dependent on any grouping of features into an object at the image level. For this purpose, we introduce a mathematical construct of object shapes, called the shape interaction matrix, which is invariant to both the object motions and the selection of coordinate systems. This invariant structure is computable solely from the observed trajectories of image features, without grouping them into individual objects. Once the matr...
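The invariant the abstract describes can be sketched concretely: collect the image trajectories of all features into one matrix, take its SVD, and form the projector onto the dominant right-singular subspace. The function name and the toy data below are illustrative assumptions, not the paper's code; the key property (a block structure separating independently moving objects) follows from the row space of the trajectory matrix splitting per object.

```python
import numpy as np

def shape_interaction(W, rank):
    """Sketch of a shape-interaction-style matrix Q = V_r V_r^T.

    W (2F x P) stacks image coordinates of P feature points over F frames;
    `rank` is the assumed total rank of the motion/shape factorization.
    Entries Q[i, j] are (ideally) zero when features i and j belong to
    independently moving objects, so Q can drive the grouping.
    """
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    Vr = Vt[:rank].T          # P x rank: dominant right singular vectors
    return Vr @ Vr.T          # P x P shape interaction matrix

# Toy check: two independent rank-1 "objects" give a block-diagonal Q.
W = np.zeros((4, 4))
W[:, :2] = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 0.5])   # object A: features 0-1
W[:, 2:] = np.outer([0.0, 1.0, 0.0, 1.0], [2.0, 1.0])   # object B: features 2-3
Q = shape_interaction(W, rank=2)
print(np.round(Q, 3))   # off-diagonal (A vs B) blocks are ~0
```

Note that the block-diagonal structure holds even though the two objects' motions are not orthogonal, because the row space of W decomposes over disjoint sets of feature columns.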
Principal Component Analysis
 Wiley Interdisciplinary Reviews: Computational Statistics, 2 (in press, 2010)
, 2010
"... Principal component analysis (pca) is a multivariate technique that analyzes a data table in which observations are described by several intercorrelated quantitative dependent variables. Its goal is to extract the important information from the table, to represent it as a set of new orthogonal var ..."
Abstract

Cited by 130 (7 self)
Principal component analysis (PCA) is a multivariate technique that analyzes a data table in which observations are described by several intercorrelated quantitative dependent variables. Its goal is to extract the important information from the table, to represent it as a set of new orthogonal variables called principal components, and to display the pattern of similarity of the observations and of the variables as points in maps. The quality of the PCA model can be evaluated using cross-validation techniques such as the bootstrap and the jackknife. PCA can be generalized as correspondence analysis (CA) in order to handle qualitative variables, and as multiple factor analysis (MFA) in order to handle heterogeneous sets of variables. Mathematically, PCA depends upon the eigendecomposition of positive semidefinite matrices and upon the singular value decomposition (SVD) of rectangular matrices.
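The two mathematical routes named in the abstract (eigendecomposition of the covariance vs. SVD of the centered data matrix) give the same principal components; a minimal NumPy check, with synthetic data standing in for a real table:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # 100 observations, 5 variables
Xc = X - X.mean(axis=0)                # center each column

# Route 1: eigendecomposition of the (positive semidefinite) covariance.
cov = Xc.T @ Xc / (len(Xc) - 1)
evals, evecs = np.linalg.eigh(cov)
evals, evecs = evals[::-1], evecs[:, ::-1]     # sort descending

# Route 2: SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Covariance eigenvalues equal squared singular values / (n - 1),
# and the principal axes agree up to sign.
assert np.allclose(evals, s**2 / (len(Xc) - 1))
scores = Xc @ Vt.T                     # principal component scores
```

In practice the SVD route is preferred numerically, since it avoids forming the covariance matrix explicitly.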
Two purposes for matrix factorization: A historical appraisal
 SIAM Review
"... Abstract. Matrix factorization in numerical linear algebra (NLA) typically serves the purpose of restating some given problem in such a way that it can be solved more readily; for example, one major application is in the solution of a linear system of equations. In contrast, within applied statistic ..."
Abstract

Cited by 25 (1 self)
Abstract. Matrix factorization in numerical linear algebra (NLA) typically serves the purpose of restating some given problem in such a way that it can be solved more readily; for example, one major application is in the solution of a linear system of equations. In contrast, within applied statistics/psychometrics (AS/P), a much more common use for matrix factorization is in presenting, possibly spatially, the structure that may be inherent in a given data matrix obtained on a collection of objects observed over a set of variables. The actual components of a factorization are now of prime importance in themselves, not just a mechanism for solving another problem. We review some connections between NLA and AS/P and their respective concerns with matrix factorization and the subsequent rank reduction of a matrix. We note in particular that several results available for many decades in AS/P were more recently (re)discovered in the NLA literature. Two other distinctions between NLA and AS/P are also discussed briefly: how a generalized singular value decomposition might be defined, and the differing uses for the (newer) methods of optimization based on cyclic or iterative projections.
Modeling the functional organization of the visual cortex
 Physica D
, 1996
"... While many models of the dynamics and interactions of single neurons are extant, analogous constructs which attempt to describe largescale (_>O(108)) neuronal activity are few and far between. Optical imaging of the visual cortex makes such macroscopic neuronal activity accessible. Symmetries la ..."
Abstract

Cited by 18 (9 self)
While many models of the dynamics and interactions of single neurons are extant, analogous constructs which attempt to describe large-scale (≥ O(10^8)) neuronal activity are few and far between. Optical imaging of the visual cortex makes such macroscopic neuronal activity accessible. Symmetries latent in the cortical architecture are used here to develop a scheme for analyzing such images. In this way, intrinsic modes of cortical response can be uncovered using minimal assumptions. Some of these modes correspond to already-familiar features of the functional architecture, and it is highly likely that others hold physiological relevance as well. Finally, the number of such modes that would be required in a more fully developed model (incorporating cortical dynamics) is approximated.
Core-Periphery Structure in the Overnight Money Market: Evidence from the eMID Trading Platform
 Kiel Working Paper 1759, Kiel Institute for the World Economy
, 2012
"... We explore the network topology arising from a dataset of the overnight interbank transactions on the eMID trading platform from January 1999 to December 2010. In order to shed light on the hierarchical structure of the banking system, we estimate di erent versions of a coreperiphery model. Our ma ..."
Abstract

Cited by 18 (4 self)
We explore the network topology arising from a dataset of the overnight interbank transactions on the eMID trading platform from January 1999 to December 2010. In order to shed light on the hierarchical structure of the banking system, we estimate different versions of a core-periphery model. Our main findings are: (1) a core-periphery structure provides a better fit for these interbank data than alternative network models; (2) the identified core is quite stable over time, consisting of roughly 28% of all banks before the global financial crisis and 23% afterwards; (3) the majority of core banks can be classified as intermediaries, i.e. as banks both borrowing and lending money; (4) allowing for asymmetric 'coreness' with respect to lending and borrowing considerably improves the fit, and reveals more concentration in borrowing than in lending activity of money center banks. During the financial crisis of 2008, the reduction of interbank lending was mainly due to core banks' reducing their numbers of active outgoing links.
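A discrete core-periphery model of the kind the abstract estimates can be illustrated on a toy network: score each candidate core set by how far the observed adjacency matrix deviates from the ideal pattern (core-core ties all present, periphery-periphery ties all absent). The brute-force search below is a sketch in the spirit of the Borgatti-Everett formulation, not the paper's estimator, and is only feasible for tiny networks:

```python
import itertools
import numpy as np

def cp_errors(A, core):
    """Errors of a discrete core-periphery model: missing core-core links
    plus present periphery-periphery links. Core-periphery ties are
    unconstrained in this variant."""
    n = len(A)
    err = 0
    for i in range(n):
        for j in range(i + 1, n):
            if i in core and j in core:
                err += 1 - A[i, j]        # core-core link should exist
            elif i not in core and j not in core:
                err += A[i, j]            # periphery-periphery link should not
    return err

def fit_core(A):
    """Brute-force search over core sets (toy-sized networks only)."""
    n = len(A)
    return min((frozenset(c) for r in range(n + 1)
                for c in itertools.combinations(range(n), r)),
               key=lambda c: cp_errors(A, c))

# Toy network: nodes 0-2 densely interconnected, 3-5 attach only to the core.
A = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 2), (3, 0), (4, 1), (5, 2)]:
    A[i, j] = A[j, i] = 1
print(sorted(fit_core(A)))   # recovers the core [0, 1, 2]
```

Real estimators replace the exhaustive search with greedy or optimization-based assignment, and the asymmetric 'coreness' variant mentioned in the abstract scores in- and out-links separately.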
On the dimensionality of face space
 IEEE Transactions on Pattern Analysis and Machine Intelligence
"... Abstract—The dimensionality of face space is measured objectively in a psychophysical study. Within this framework, we obtain a measurement of the dimension for the human visual system. Using an eigenface basis, evidence is presented that talented human observers are able to identify familiar faces ..."
Abstract

Cited by 14 (1 self)
Abstract—The dimensionality of face space is measured objectively in a psychophysical study. Within this framework, we obtain a measurement of the dimension for the human visual system. Using an eigenface basis, evidence is presented that talented human observers are able to identify familiar faces that lie in a space of roughly 100 dimensions, and that the average observer requires a space of between 100 and 200 dimensions. This is below most current estimates. It is further argued that these estimates give an upper bound for face space dimension, which might be lowered by better constructed “eigenfaces” and by talented observers. Index Terms—Face and gesture recognition, computational models of vision, psychology, singular value decomposition.
A new data processing inequality and its applications in distributed source and channel coding
 IEEE Transactions on Information Theory
, 2011
"... In the distributed coding of correlated sources, the problem of characterizing the joint probability distribution of a pair of random variables satisfying an nletter Markov chain arises. The exact solution of this problem is intractable. In this paper, we seek a singleletter necessary condition f ..."
Abstract

Cited by 14 (0 self)
In the distributed coding of correlated sources, the problem arises of characterizing the joint probability distribution of a pair of random variables satisfying an n-letter Markov chain. The exact solution of this problem is intractable. In this paper, we seek a single-letter necessary condition for this n-letter Markov chain. To this end, we propose a new data processing inequality on a new measure of correlation, obtained through a spectral method. Based on this new data processing inequality, we provide a single-letter necessary condition for the required joint probability distribution. We apply our results to two specific problems involving the distributed coding of correlated sources: the multiple-access channel with correlated sources and the multi-terminal rate-distortion region, and propose new necessary conditions for these two problems.
Nonnegative Matrix Factorization via Rank-One Downdate
"... Nonnegative matrix factorization (NMF) was popularized as a tool for data mining by Lee and Seung in 1999. NMF attempts to approximate a matrix with nonnegative entries by a product of two lowrank matrices, also with nonnegative entries. We propose an algorithm called rankone downdate (R1D) for co ..."
Abstract

Cited by 14 (1 self)
Nonnegative matrix factorization (NMF) was popularized as a tool for data mining by Lee and Seung in 1999. NMF attempts to approximate a matrix with nonnegative entries by a product of two low-rank matrices, also with nonnegative entries. We propose an algorithm called rank-one downdate (R1D) for computing an NMF that is partly motivated by the singular value decomposition. This algorithm computes the dominant singular values and vectors of adaptively determined submatrices of a matrix. On each iteration, R1D extracts a rank-one submatrix from the original matrix according to an objective function. We establish a theoretical result that maximizing this objective function corresponds to correctly classifying articles in a nearly separable corpus. We also provide computational experiments showing the success of this method in identifying features in realistic datasets. The method is also much faster than other NMF routines.

1. Nonnegative Matrix Factorization

Several problems in information retrieval can be posed as low-rank matrix approximation. The seminal paper by Deerwester et al. (1990) on latent semantic indexing (LSI) showed that approximating a term-document matrix describing a corpus of articles via the SVD led to powerful query and classification techniques. A drawback of LSI is that the low-rank factors in general will have both positive and negative entries, and there is no obvious statistical interpretation of the negative entries. This led Lee and Seung (1999), among others, to propose nonnegative matrix
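The SVD-motivated idea behind rank-one downdating can be sketched in a simplified form: repeatedly take the leading singular pair of the residual, clip it to be nonnegative, and subtract the resulting rank-one term. This greedy loop is NOT the authors' R1D (which selects submatrices via an objective function); it is only a minimal illustration of the downdate step, and the function name and toy corpus are invented for the example.

```python
import numpy as np

def greedy_rank_one_nmf(A, k):
    """Greedy nonnegative factorization sketch (not the authors' R1D):
    take the leading singular pair of the residual, flip signs so the
    dominant mass is positive, clip negatives to zero, and downdate.
    Returns W (m x k) and H (k x n) with A ~= W @ H."""
    R = A.astype(float).copy()
    m, n = A.shape
    W, H = np.zeros((m, k)), np.zeros((k, n))
    for t in range(k):
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        u, v = U[:, 0], Vt[0]
        if u.sum() < 0:                  # resolve the SVD sign ambiguity
            u, v = -u, -v
        u, v = np.clip(u, 0, None), np.clip(v, 0, None)
        W[:, t], H[t] = np.sqrt(s[0]) * u, np.sqrt(s[0]) * v
        R -= np.outer(W[:, t], H[t])     # rank-one downdate of the residual
    return W, H

# Nearly separable toy corpus: two disjoint "topics" are recovered exactly.
A = np.array([[3., 3., 0., 0.],
              [3., 3., 0., 0.],
              [0., 0., 2., 2.],
              [0., 0., 2., 2.]])
W, H = greedy_rank_one_nmf(A, 2)
print(np.allclose(W @ H, A))   # True for this block-diagonal example
```

On overlapping (non-separable) data, clipping discards information on each pass, which is one reason R1D works on adaptively chosen submatrices rather than the full residual.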