Results 1 – 10 of 2,352
A Linear Solution to 1-Dimensional Subspace Fitting under Incomplete Data
"... Abstract. Computing a 1-dimensional linear subspace is an important problem in many computer vision algorithms. Its importance stems from the fact that maximizing a linear homogeneous equation system can be interpreted as a subspace fitting problem. It is trivial to compute the solution if all coeffic ..."
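In the complete-data case the abstract refers to, fitting a 1-dimensional subspace reduces to a singular value decomposition; a minimal sketch of that baseline (not the paper's incomplete-data algorithm), with synthetic data as a stand-in:

```python
import numpy as np

# Complete-data baseline: the best-fitting 1-dimensional subspace of a
# centered data matrix is spanned by its leading right singular vector.
rng = np.random.default_rng(0)
direction = np.array([3.0, 4.0]) / 5.0                 # true unit direction
t = rng.normal(size=(200, 1))
X = t * direction + 0.01 * rng.normal(size=(200, 2))   # noisy 1-D data in R^2

_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
fitted = Vt[0]                                          # dominant direction

# The subspace is recovered only up to sign, so compare |<fitted, direction>|.
alignment = abs(fitted @ direction)
```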
Sparse subspace clustering
In CVPR, 2009
Cited by 241 (14 self)
"... We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has a SR with respect to a dictionary formed by all oth ..."
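The central fact quoted above — each point admits a sparse representation over a dictionary of the other points, with support inside its own subspace — can be illustrated on a toy union of two orthogonal lines in R^3. The ℓ1 solver below is a generic iterative soft-thresholding (ISTA) routine, not the paper's optimization method:

```python
import numpy as np

def ista(D, y, alpha=0.01, iters=500):
    # iterative soft-thresholding for 0.5*||y - D c||^2 + alpha*||c||_1
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        c = c - step * (D.T @ (D @ c - y))          # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - step * alpha, 0.0)
    return c

# Toy data: 20 points on the x-axis line and 20 on the y-axis line of R^3.
scales = np.linspace(1.0, 2.0, 20)
X = np.vstack([np.outer(scales, [1.0, 0.0, 0.0]),
               np.outer(scales, [0.0, 1.0, 0.0])])

# Represent point 0 (an x-axis point) by the remaining 39 points.
others = np.delete(np.arange(40), 0)
coef = ista(X[others].T, X[0])

mass_same = np.abs(coef[:19]).sum()    # weight on the other x-axis points
mass_other = np.abs(coef[19:]).sum()   # weight on the y-axis points
```

In this noise-free toy, all representation weight lands on points from the same subspace, which is the property the clustering step exploits.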
A Growing Neural Gas Network Learns Topologies
Advances in Neural Information Processing Systems 7, 1995
Cited by 401 (5 self)
"... An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the "neural gas" method of Martinetz and Schulten (1991, 1994), this m ..."
"... data is available but no information on the desired output. What can the goal of learning be in this situation? One possible objective is dimensionality reduction: finding a low-dimensional subspace of the input vector space containing most or all of the input data. Linear subspaces with this property ..."
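The dimensionality-reduction objective described in the excerpt — a low-dimensional linear subspace containing most of the input data — is what PCA computes; a minimal sketch with synthetic data as a stand-in:

```python
import numpy as np

# Synthetic data that truly lies near a 2-D subspace of R^10.
rng = np.random.default_rng(2)
latent = rng.normal(size=(500, 2))
basis = rng.normal(size=(2, 10))
X = latent @ basis + 0.01 * rng.normal(size=(500, 10))

# PCA via SVD of the centered data: the top-2 singular directions span
# the fitted subspace; their share of the variance measures the fit.
_, S, _ = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
```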
The invariant factors of the incidence matrices of points and subspaces
Trans. Amer. Math. Soc., 2006
Cited by 12 (8 self)
"... Abstract. We determine the Smith normal forms of the incidence matrices of points and projective (r − 1)-dimensional subspaces of PG(n, q) and of the incidence matrices of points and r-dimensional affine subspaces of AG(n, q) for all n, r, and arbitrary prime power q. ..."
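For a concrete toy instance of such invariant factors, the Smith normal form of the point-line incidence matrix of the Fano plane PG(2, 2) can be computed with SymPy; the choice of example is ours, not the paper's:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Point-line incidence matrix of the Fano plane PG(2, 2): entry (i, j)
# is 1 exactly when point i lies on line j.
fano_lines = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
              {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
A = Matrix(7, 7, lambda i, j: 1 if i in fano_lines[j] else 0)

# The diagonal of the Smith normal form lists the invariant factors.
snf = smith_normal_form(A, domain=ZZ)
invariant_factors = [abs(snf[i, i]) for i in range(7)]
```

Here the factors are 1, 1, 1, 1, 2, 2, 6, consistent with |det A| = 24 and the well-known GF(2)-rank 4 of the Fano incidence matrix.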
Hyperspectral image classification and dimensionality reduction: an orthogonal subspace projection approach
IEEE Transactions on Geoscience and Remote Sensing, 1994
Cited by 187 (16 self)
"... Abstract—Most applications of hyperspectral imagery require processing techniques which achieve two fundamental goals: 1) detect and classify the constituent materials for each pixel in the scene; 2) reduce the data volume/dimensionality, without loss of critical information, so that it can be proc ..."
"... to-noise ratio and results in a single component image that represents a classification for the signature of interest. The orthogonal subspace projection (OSP) operator can be extended to k signatures of interest, thus reducing the dimensionality of k and classifying the hyperspectral image simultaneously ..."
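The OSP operator mentioned in the excerpt annihilates the undesired signatures and then correlates with the desired one; a minimal noise-free sketch with hypothetical signatures d and U:

```python
import numpy as np

# Hypothetical signatures: d is the desired spectral signature, the
# columns of U are undesired (interfering) signatures.
rng = np.random.default_rng(3)
d = np.array([1.0, 0.0, 0.0, 1.0])
U = rng.normal(size=(4, 2))

# OSP operator: orthogonal projector onto the complement of span(U).
P = np.eye(4) - U @ np.linalg.pinv(U)

# A noise-free mixed pixel containing the desired signature with
# abundance 0.7 plus interference from U.
pixel = 0.7 * d + U @ np.array([0.5, -0.2])
abundance_est = (d @ P @ pixel) / (d @ P @ d)   # interference is annihilated
```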
Covering lattice points by subspaces
Period. Math. Hungar., 2001
Cited by 5 (0 self)
"... We find tight estimates for the minimum number of proper subspaces needed to cover all lattice points in an n-dimensional convex body C, symmetric about the origin 0. This enables us to prove the following statement, which settles a problem of G. Halász. The maximum number of n-wise linearly indepe ..."
"... independent lattice points in the n-dimensional ball rB^n of radius r around 0 is O(r^(n/(n−1))). This bound cannot be improved. We also show that the order of magnitude of the number of different (n − 1)-dimensional subspaces induced by the lattice points in rB^n is r^(n(n−1)). ..."
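The subspace count in the last sentence can be checked numerically in the smallest case n = 2, where the (n − 1)-dimensional subspaces are lines through the origin and the predicted order of growth is r^(n(n−1)) = r^2; a toy count:

```python
from math import gcd

# Count distinct lines through the origin spanned by nonzero lattice
# points in the disk of radius r (each line corresponds to one
# primitive direction, taken up to sign).
def count_lines(r):
    lines = set()
    for x in range(-r, r + 1):
        for y in range(-r, r + 1):
            if (x, y) != (0, 0) and x * x + y * y <= r * r:
                g = gcd(abs(x), abs(y))
                dx, dy = x // g, y // g
                if dx < 0 or (dx == 0 and dy < 0):
                    dx, dy = -dx, -dy        # normalize the sign
                lines.add((dx, dy))
    return len(lines)

# Doubling r should roughly quadruple the count if growth is ~ r^2.
ratio = count_lines(40) / count_lines(20)
```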
An alternative to compactification
Phys. Rev. Lett., 1999
Cited by 186 (1 self)
"... Conventional wisdom states that Newton’s force law implies only four non-compact dimensions. We demonstrate that this is not necessarily true in the presence of a non-factorizable background geometry. The specific example we study is a single 3-brane embedded in five dimensions. We show that even wi ..."
"... this can be avoided if the Standard Model is confined to a (3 + 1)-dimensional subspace, or “3-brane”, in the higher dimensions ..."
On Beamforming with Finite Rate Feedback in Multiple Antenna Systems
2003
Cited by 272 (14 self)
"... In this paper, we study a multiple antenna system where the transmitter is equipped with quantized information about instantaneous channel realizations. Assuming that the transmitter uses the quantized information for beamforming, we derive a universal lower bound on the outage probability for any f ..."
"... between any two beamforming vectors in the beamformer codebook, and is equivalent to the problem of designing unitary space-time codes under certain conditions. Finally, we show that good beamformers are good packings of 2-dimensional subspaces in a 2t-dimensional real Grassmannian manifold with chordal ..."
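The chordal distance mentioned at the end measures separation between subspaces via their principal angles; a small sketch for k-dimensional subspaces given by orthonormal basis matrices (the example subspaces are ours):

```python
import numpy as np

def chordal_distance(U1, U2):
    # For orthonormal bases of k-dimensional subspaces:
    # d^2 = k - ||U1^T U2||_F^2 = sum of squared sines of principal angles.
    k = U1.shape[1]
    gap = k - np.linalg.norm(U1.T @ U2, "fro") ** 2
    return np.sqrt(max(gap, 0.0))            # clamp tiny negative round-off

U1 = np.eye(4)[:, :2]                        # span{e1, e2} in R^4
U2 = np.eye(4)[:, 2:]                        # span{e3, e4}, its complement
d_far = chordal_distance(U1, U2)             # sqrt(2): maximal for k = 2
d_zero = chordal_distance(U1, U1)            # 0: identical subspaces
```

A good codebook in this sense is one whose subspaces pairwise maximize this distance, which is the Grassmannian packing problem.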
Multidimensional Independent Component Analysis
In Proc. Int. Workshop on Higher-Order Stat., 1998
Cited by 257 (15 self)
"... This discussion paper proposes to generalize the notion of Independent Component Analysis (ICA) to the notion of Multidimensional Independent Component Analysis (MICA). We start from the ICA or blind source separation (BSS) model and show that it can be uniquely identified provided it is properly p ..."
"... parameterized in terms of one-dimensional subspaces. From this standpoint, the BSS/ICA model is generalized to multidimensional components. We discuss how standard ICA algorithms can be adapted to MICA decomposition. The relevance of these ideas is illustrated by a MICA decomposition of ECG signals. ..."
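One step common to standard ICA/BSS algorithms (and reused in their multidimensional variants) is whitening, which reduces the unknown mixing matrix to an orthogonal one, so only a rotation of the component subspaces remains to be identified. A minimal sketch of that preprocessing step, not the paper's identifiability argument:

```python
import numpy as np

# Mix independent non-Gaussian sources with an unknown matrix A.
rng = np.random.default_rng(4)
S = rng.laplace(size=(3, 5000))
A = rng.normal(size=(3, 3))
X = A @ S

# Whitening: after Z = W X the covariance is the identity, so the
# remaining unmixing is an orthogonal transform.
cov = np.cov(X)
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T
Z = W @ X
cov_white = np.cov(Z)
```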
Tensor subspace analysis
In Advances in Neural Information Processing Systems 18 (NIPS), 2005
Cited by 65 (4 self)
"... Previous work has demonstrated that the image variations of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces. The typical linear subspace learning algorithms include Principal Component Analysis (PCA), Linear Discriminant An ..."
"... Analysis (LDA), and Locality Preserving Projection (LPP). All of these methods consider an n1 × n2 image as a high-dimensional vector in R^(n1×n2), while an image represented in the plane is intrinsically a matrix. In this paper, we propose a new algorithm called Tensor Subspace Analysis (TSA). TSA considers ..."
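The parameter-count argument behind treating an image as a matrix can be made concrete: a bilinear projection U^T X V needs n1·k1 + n2·k2 parameters, versus n1·n2·k1·k2 for a general linear map on the vectorized image. The bases below are hypothetical truncated identities for illustration, not TSA's learned projections:

```python
import numpy as np

# A 32 x 32 "image" kept as a matrix rather than a 1024-vector.
n1, n2, k1, k2 = 32, 32, 4, 4
X = np.random.default_rng(5).normal(size=(n1, n2))

# Hypothetical projection bases (truncated identities for illustration).
U = np.eye(n1)[:, :k1]
V = np.eye(n2)[:, :k2]
Y = U.T @ X @ V                               # reduced k1 x k2 representation

params_bilinear = n1 * k1 + n2 * k2           # parameters of (U, V)
params_vectorized = (n1 * n2) * (k1 * k2)     # general map on vectorized X
```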