Results 1-10 of 225,361
Lambertian Reflectance and Linear Subspaces, 2000
Cited by 514 (20 self)
"We prove that the set of all reflectance functions (the mapping from surface normals to intensities) produced by Lambertian objects under distant, isotropic lighting lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wi ..."
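The 9D subspace referred to above is spanned by the first nine spherical harmonics evaluated at the surface normals. A minimal numpy sketch of that basis (using the standard real spherical harmonic constants; this is an illustration, not the paper's code) might look like:

```python
import numpy as np

def harmonic_images(normals):
    """First 9 real spherical harmonics evaluated at unit surface normals.

    normals: (N, 3) array of unit vectors.  Returns an (N, 9) matrix whose
    columns approximately span the space of Lambertian reflectance images.
    """
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    c0 = 1.0 / np.sqrt(4 * np.pi)          # order 0
    c1 = np.sqrt(3.0 / (4 * np.pi))        # order 1
    c2 = np.sqrt(15.0 / (4 * np.pi))       # order 2, off-diagonal terms
    c3 = np.sqrt(5.0 / (16 * np.pi))
    c4 = np.sqrt(15.0 / (16 * np.pi))
    return np.column_stack([
        c0 * np.ones_like(x),
        c1 * y, c1 * z, c1 * x,
        c2 * x * y, c2 * y * z,
        c3 * (3 * z**2 - 1),
        c2 * x * z, c4 * (x**2 - y**2),
    ])
```

An image of the object under any distant isotropic lighting is then approximately a linear combination of these nine basis images.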
Computing the Dimension of Linear Subspaces, in SOFSEM 2000: Theory and Practice of Informatics, Lecture Notes in Computer Science, 2000
Cited by 5 (5 self)
"Since its very beginning, linear algebra has been a highly algorithmic subject. Let us just mention the famous Gauss algorithm, which was invented before the theory of algorithms was developed. The purpose of this paper is to link linear algebra explicitly to computable analysis, that is, the theory of computable real number functions. In particular, we investigate in which sense the dimension of a given linear subspace can be computed. The answer depends strongly on how the linear subspace is given: if it is given by a finite number of vectors whose linear span represents the space, then the dimension ..."
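In the floating-point (rather than computable-analysis) setting, the case described above — a subspace given by a finite spanning set — reduces to a numerical rank computation. A minimal sketch:

```python
import numpy as np

# A subspace given by a finite spanning set of vectors: its dimension is the
# rank of the matrix stacking those vectors (computed numerically via SVD).
vectors = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [1.0, 1.0, 0.0]])   # third vector depends on the first two
dim = np.linalg.matrix_rank(vectors)    # dimension of the span
```

The paper's point is that in exact computable analysis this rank is not computable in general, whereas floating-point rank determination relies on a tolerance for small singular values.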
Acquiring Linear Subspaces for Face Recognition under Variable Lighting, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005
Cited by 302 (2 self)
"Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in ... vectors of a low-dimensional linear space, and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that by taking k images of an object under these single sources ..."
Unified Linear Subspace ...
"The Basic Vector Space Model (BVSM) is well known in information retrieval. Unfortunately, it cannot retrieve all relevant documents, since it is based on literal term matching. The Generalized Vector Space Model (GVSM) and Latent Semantic Indexing (LSI) are two well-known semantic retrieval methods, both of which assume there is some underlying latent semantic structure in a dataset. However, their assumptions about where the semantic structure is located are rather strong. In this paper, we present a new understanding of the latent semantic space of a dataset from the dual perspective, which relaxes the above assumptions and leads naturally to a unified kernel function for a class of vector space models, making its geometric meaning clear. New semantic analysis methods based on the unified kernel function are developed, combining the advantages of LSI and GVSM. We also present a mathematical theorem showing that the new methods are stable with respect to the choice of rank. The experimental results of our methods are promising on the standard test sets."
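For reference, the LSI baseline mentioned above is a truncated SVD of the term-document matrix; documents are then compared in the rank-k latent space rather than by literal term matching. A toy sketch (the matrix values are made up for illustration):

```python
import numpy as np

# Tiny term-document matrix: rows = terms, columns = documents.
X = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0]])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                         # retained rank
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k "semantic" approximation

# Each document becomes a k-dimensional vector in the latent space;
# similarity is computed between these vectors, not raw term counts.
docs = (np.diag(s[:k]) @ Vt[:k, :]).T
```

The theorem cited in the abstract concerns how sensitive such methods are to the choice of k.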
Mixtures of Linear Subspaces for Face Detection, 1999
Cited by 19 (3 self)
"We present two methods using mixtures of linear subspaces for face detection in gray-level images. One method uses a mixture of factor analyzers to concurrently perform clustering and, within each cluster, perform local dimensionality reduction. The parameters of the mixture model are estimated usin ..."
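The core idea above — cluster the data, then fit a low-dimensional linear subspace per cluster — can be sketched crudely in numpy. This stand-in assumes the clustering has already been done (the paper instead fits everything jointly via EM on a mixture of factor analyzers):

```python
import numpy as np

def local_subspaces(X, labels, d):
    """Per-cluster PCA: for each cluster, return its mean and d principal
    directions.  A crude stand-in for the mixture-of-factor-analyzers idea,
    with clustering assumed given via `labels`."""
    models = {}
    for c in np.unique(labels):
        Xc = X[labels == c]
        mu = Xc.mean(axis=0)
        # Principal directions = top right singular vectors of centered data.
        _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
        models[c] = (mu, Vt[:d])
    return models
```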
Numerical Methods for Computing Angles between Linear Subspaces, 1971
Cited by 164 (3 self)
"Assume that two subspaces F and G of a unitary space are defined as the ranges (or null spaces) of given rectangular matrices A and B. Accurate numerical methods are developed for computing the principal angles θ_k(F, G) and orthogonal sets of principal vectors u_k ∈ F and v_k ∈ G, k = 1, 2, ..., q = d ..."
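The standard SVD-based formulation of this computation: orthonormalize each range, then the singular values of the cross-product matrix are the cosines of the principal angles. A minimal sketch:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between range(A) and range(B), via SVD.

    Orthonormalize each range with QR; the singular values of Q_A^T Q_B
    are the cosines of the principal angles.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))
```

(SciPy ships this as `scipy.linalg.subspace_angles`; the pure-QR variant above loses accuracy for very small angles, which is part of what the paper addresses.)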
Étale Cohomology of the Complement of a Linear Subspace Arrangement
"We prove a formula for the cup product on the ℓ-adic cohomology of the complement of a linear subspace arrangement."
Banded Householder Representation of Linear Subspaces
Cited by 1 (1 self)
"We show how to compactly represent any n-dimensional subspace of R^m as a banded product of Householder reflections using n(m − n) floating-point numbers. This is optimal, since these subspaces form a Grassmannian space Gr_n(m) of dimension n(m − n). The representation is stable and easy to compute: any matrix can be factored into the product of a banded Householder matrix and a square matrix using two to three QR decompositions. If m ≥ n, the Householder QR algorithm represents an m × n orthogonal matrix U as a product ..."
Keywords: banded, linear subspace, orthogonal matrix, Householder reflection
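The Householder QR algorithm the abstract builds on can be written in a few lines; each step applies a reflection H = I − 2vvᵀ that zeroes one column below the diagonal. A textbook sketch (not the paper's banded variant):

```python
import numpy as np

def householder_qr(A):
    """Classic Householder QR: returns Q (m x m orthogonal) and R (upper
    triangular) with Q @ R == A.  Each iteration reflects one column of the
    trailing submatrix onto a coordinate axis."""
    R = A.astype(float).copy()
    m, n = R.shape
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])  # avoid cancellation
        v /= np.linalg.norm(v)
        # Apply H = I - 2 v v^T on the left of the trailing block of R,
        # and accumulate it into Q on the right.
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R
```

The paper's contribution is arranging such reflections into a banded product so that an n-dimensional subspace of R^m costs only n(m − n) stored numbers.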
Visual Tracking Using Learned Linear Subspaces, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2004
Cited by 57 (7 self)
"This paper presents a simple but robust visual tracking algorithm based on representing the appearances of objects using affine warps of learned linear subspaces of the image space. The tracker adaptively updates this subspace while tracking by finding a linear subspace that best approximates the ob ..."
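The adaptive-update idea — grow or adjust the appearance subspace as new observations arrive — can be illustrated with a simple Gram–Schmidt expansion step. This is a crude stand-in, not the paper's actual update rule:

```python
import numpy as np

def update_subspace(U, x, tol=1e-10):
    """Expand an orthonormal basis U (columns) with a new observation x:
    append the normalized residual of x if it lies outside span(U)."""
    r = x - U @ (U.T @ x)          # component of x outside the subspace
    nr = np.linalg.norm(r)
    if nr > tol:
        U = np.column_stack([U, r / nr])
    return U
```

A real tracker would also downweight or discard old directions to keep the subspace small and current.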
Mixtures of Local Linear Subspaces for Face Recognition, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1998
Cited by 34 (5 self)
"Traditional subspace methods for face recognition compute a measure of similarity between images after projecting them onto a fixed linear subspace that is spanned by some principal component vectors (a.k.a. "eigenfaces") of a training set of images. By supposing a parametric Gaussian dist ..."
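The "traditional" eigenfaces baseline this paper improves on fits in a few lines: PCA on the training images, then compare projections. A sketch with hypothetical stand-in data (random pixels in place of face images):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in: 20 training "face" images, 100 pixels each.
train = rng.standard_normal((20, 100))
mean = train.mean(axis=0)

# Eigenfaces = top principal components of the centered training set.
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
eigenfaces = Vt[:5]                       # one fixed 5-D linear subspace

def project(img):
    return eigenfaces @ (img - mean)      # coordinates in "face space"

# Similarity between two images = (negative) distance between projections.
a, b = rng.standard_normal(100), rng.standard_normal(100)
similarity = -np.linalg.norm(project(a) - project(b))
```

The paper replaces this single fixed subspace with a mixture of local linear subspaces, one per cluster of the training data.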