Results 1-10 of 470
Acquiring linear subspaces for face recognition under variable lighting
 IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005
Abstract

Cited by 317 (2 self)
Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: a large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources, and again PCA is used to estimate a subspace. Finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are directly used as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space, and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and/or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. We validate the use of subspaces constructed in this fashion within the context of face recognition.
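The first acquisition strategy in this abstract, estimating a lighting subspace with PCA from many images, can be sketched in a few lines. The image size, subspace dimension k=9, and synthetic data below are illustrative assumptions, not values tied to the paper's experiments.

```python
import numpy as np

def lighting_subspace(images, k=9):
    """images: (n_images, n_pixels) array; returns an (n_pixels, k) orthonormal basis."""
    X = images - images.mean(axis=0)           # center the data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T                            # top-k principal directions

rng = np.random.default_rng(0)
imgs = rng.standard_normal((40, 100))          # 40 synthetic "images" of 100 pixels each
B = lighting_subspace(imgs, k=9)
print(B.shape)                                 # (100, 9)
print(np.allclose(B.T @ B, np.eye(9)))         # True: the basis is orthonormal
```

A new image would then be recognized by projecting it onto each object's basis and picking the smallest residual.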
All-frequency shadows using non-linear wavelet lighting approximation
 ACM Transactions on Graphics, 2003
Abstract

Cited by 188 (25 self)
We present a method, based on precomputed light transport, for real-time rendering of objects under all-frequency, time-varying illumination represented as a high-resolution environment map. Current techniques are limited to small area lights, with sharp shadows, or large low-frequency lights, with very soft shadows. Our main contribution is to approximate the environment map in a wavelet basis, keeping only the largest terms (this is known as a nonlinear approximation). We obtain further compression by encoding the light transport matrix sparsely but accurately in the same basis. Rendering is performed by multiplying a sparse light vector by a sparse transport matrix, which is very fast. For accurate rendering, using nonlinear wavelets is an order of magnitude faster than using linear spherical harmonics, the current best technique.
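The core idea, nonlinear wavelet approximation, can be sketched on a toy 1-D signal: transform into an orthonormal Haar basis and keep only the largest coefficients. The 8-sample "environment map" and the truncation count are assumptions for illustration.

```python
import numpy as np

def haar_1d(x):
    """Orthonormal 1-D Haar transform of a power-of-two-length signal."""
    out = x.astype(float).copy()
    n = len(out)
    while n > 1:
        half = n // 2
        a = (out[:n:2] + out[1:n:2]) / np.sqrt(2)   # averages (scaling coefficients)
        d = (out[:n:2] - out[1:n:2]) / np.sqrt(2)   # differences (wavelet coefficients)
        out[:half], out[half:n] = a, d
        n = half
    return out

def nonlinear_approx(coeffs, keep):
    """Keep only the `keep` largest-magnitude coefficients, zeroing the rest."""
    idx = np.argsort(np.abs(coeffs))[::-1][:keep]
    sparse = np.zeros_like(coeffs)
    sparse[idx] = coeffs[idx]
    return sparse

light = np.array([4., 4., 4., 4., 9., 9., 1., 1.])   # toy piecewise-constant lighting
c = haar_1d(light)
print(np.count_nonzero(nonlinear_approx(c, 3)))       # 3: a sparse light vector
```

Rendering then reduces to a sparse-vector times sparse-matrix product in this basis, which is what makes the method fast.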
Skinning mesh animations
 Proceedings of SIGGRAPH, 2005
Abstract

Cited by 134 (6 self)
We extend approaches for skinning characters to the general setting of skinning deformable mesh animations. We provide an automatic algorithm for generating progressive skinning approximations that is particularly efficient for pseudo-articulated motions. Our contributions include the use of nonparametric mean shift clustering of high-dimensional mesh rotation sequences to automatically identify statistically relevant bones, and robust least squares methods to determine bone transformations, bone-vertex influence sets, and vertex weight values. We use a low-rank data reduction model defined in the undeformed mesh configuration to provide progressive convergence with a fixed number of bones. We show that the resulting skinned animations enable efficient hardware rendering, rest pose editing, and deformable collision detection. Finally, we present numerous examples where skins were automatically generated using a single set of parameter values.
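One ingredient the abstract names, least-squares estimation of bone transformations, can be sketched as fitting an affine transform from rest-pose to deformed vertex positions. The vertex data and the 3x4 affine parameterization below are assumptions for illustration.

```python
import numpy as np

def fit_bone_transform(rest, deformed):
    """rest, deformed: (n, 3) vertex positions. Returns a (3, 4) affine
    transform A with deformed ≈ [rest | 1] @ A.T, via linear least squares."""
    n = rest.shape[0]
    H = np.hstack([rest, np.ones((n, 1))])          # homogeneous coordinates
    A, *_ = np.linalg.lstsq(H, deformed, rcond=None)
    return A.T                                      # (3, 4)

rng = np.random.default_rng(1)
rest = rng.standard_normal((50, 3))                 # synthetic rest-pose vertices
true_A = np.hstack([np.eye(3), np.array([[1.], [2.], [3.]])])  # pure translation
deformed = rest @ true_A[:, :3].T + true_A[:, 3]
A = fit_bone_transform(rest, deformed)
print(np.allclose(A, true_A))                       # True: the transform is recovered
```

The paper's pipeline additionally weights vertices by bone influence and solves for those weights robustly; this sketch covers only the per-bone fit.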
Clustered principal components for precomputed radiance transfer
 SIGGRAPH, 2003
Abstract

Cited by 128 (7 self)
We compress storage and accelerate performance of precomputed radiance transfer (PRT), which captures the way an object shadows, scatters, and reflects light. PRT records a transfer matrix over many surface points. At runtime, this matrix transforms a vector of spherical harmonic coefficients representing distant, low-frequency source lighting into exiting radiance. Per-point transfer matrices form a high-dimensional surface signal that we compress using clustered principal component analysis (CPCA), which partitions many samples into fewer clusters, each approximating the signal as an affine subspace. CPCA thus reduces the high-dimensional transfer signal to a low-dimensional set of per-point weights on a per-cluster set of representative matrices. Rather than computing a weighted sum of representatives and applying this result to the lighting, we apply the representatives to the lighting per-cluster (on the CPU) and weight these results per-point (on the GPU). Since the output of the matrix is lower-dimensional than the matrix itself, this reduces computation. We also increase the accuracy of encoded radiance functions with a new least-squares optimal projection of spherical harmonics onto the hemisphere. We describe an implementation on graphics hardware that performs real-time rendering of glossy objects with dynamic self-shadowing and interreflection without fixing the view or light as in previous work. Our approach also allows significantly increased lighting frequency when rendering diffuse objects and includes subsurface scattering.
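The CPCA step can be sketched as plain Lloyd-style clustering followed by per-cluster PCA about the cluster mean (an affine subspace). The seeding, synthetic data, and cluster/component counts below are simplifying assumptions.

```python
import numpy as np

def cpca(X, n_clusters=2, n_components=1, iters=10):
    """Clustered PCA sketch: cluster rows of X, then fit an affine subspace
    (mean + principal directions) per cluster."""
    # Seed centers at evenly spaced samples (a toy substitute for careful seeding).
    centers = X[np.linspace(0, len(X) - 1, n_clusters).astype(int)].copy()
    for _ in range(iters):                          # Lloyd iterations
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(n_clusters):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    models = []
    for c in range(n_clusters):                     # per-cluster mean + directions
        Xc = X[labels == c] - centers[c]
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        models.append((centers[c], Vt[:n_components]))
    return labels, models

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (30, 4)),       # two well-separated blobs
               rng.normal(5.0, 0.1, (30, 4))])
labels, models = cpca(X)
print(len(models), models[0][1].shape)              # 2 (1, 4)
```

Each sample is then stored as its cluster id plus a few subspace weights, which is the compression the abstract describes.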
Fast separation of direct and global components of a scene using high frequency illumination
 ACM Transactions on Graphics, 2006
Abstract

Cited by 127 (20 self)
[Figure caption: the scene includes a wide variety of physical phenomena that produce complex global illumination effects; its (b) direct and (c) global illumination components were estimated by shifting a single checkerboard pattern 25 times to overcome the optical and resolution limits of the source (projector) and sensor (camera). The direct and global images have been brightness-scaled by a factor of 1.25. In theory, the separation can be done using just 2 images; when results are only needed at a resolution lower than those of the source and sensor, a single image suffices.]
We present fast methods for separating the direct and global illumination components of a scene measured by a camera and illuminated by a light source. In theory, the separation can be done with just two images taken with a high frequency binary illumination pattern and its complement. In practice, a larger number of images are used to overcome the optical and resolution limitations of the camera and the source. The approach does not require the material properties of objects and media in the scene to be known. However, we require that the illumination frequency is high enough to adequately sample the global components received by scene points. We present separation results for scenes that include complex interreflections, subsurface scattering, and volumetric scattering. Several variants of the separation approach are also described. When a sinusoidal illumination pattern is used with different phase shifts, the separation can be done using just three images. When the computed images are of lower resolution than the source and the camera, smoothness constraints are used to perform the separation using a single image. Finally, in the case of a static scene that is lit by a simple point source, such as the sun, a moving occluder and a video camera can be used to do the separation. We also show several simple examples of how novel images of a scene can be computed from the separation results.
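The two-image separation described above can be sketched directly: with a high-frequency pattern that lights half of the source elements, a lit pixel sees its full direct component plus half its global component, while an unlit pixel sees half the global component alone. Taking per-pixel max/min over shifted patterns then yields direct = max - min and global = 2 * min. The radiance values below are synthetic assumptions.

```python
import numpy as np

direct = np.array([3.0, 0.5, 2.0])   # true per-pixel direct radiance (synthetic)
globl  = np.array([1.0, 4.0, 0.0])   # true per-pixel global radiance (synthetic)

lit   = direct + 0.5 * globl         # pixel lit by the half-on pattern
unlit = 0.5 * globl                  # same pixel under the complement pattern
L_max = np.maximum(lit, unlit)       # per-pixel max over the shifted patterns
L_min = np.minimum(lit, unlit)       # per-pixel min over the shifted patterns

print(np.allclose(L_max - L_min, direct))   # True: direct component recovered
print(np.allclose(2.0 * L_min, globl))      # True: global component recovered
```

With real projectors and cameras, many shifted patterns are used, as the abstract notes, precisely because lit and unlit states are not this clean.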
Triple Product Wavelet Integrals for All-Frequency Relighting
 2004
Abstract

Cited by 109 (9 self)
This paper focuses on efficient rendering based on precomputed light transport, with realistic materials and shadows under all-frequency direct lighting such as environment maps. The basic difficulty is representation and computation in the 6D space of light direction, view direction, and surface position. While image-based and synthetic methods for real-time rendering have been proposed, they do not scale to high sampling rates with variation of both lighting and viewpoint. Current approaches are therefore limited to lower dimensionality (only lighting or viewpoint variation, not both) or lower sampling rates (low-frequency lighting and materials). We propose a new mathematical and computational analysis of precomputed light transport. We use factored forms, separately precomputing and representing visibility and material properties. Rendering then requires computing triple product integrals at each vertex, involving the lighting, visibility and BRDF. Our main contribution is a general analysis of these triple product integrals, which are likely to have broad applicability in computer graphics and numerical analysis. We first determine the computational complexity in a number of bases like point samples, spherical harmonics and wavelets. We then give efficient linear and sublinear-time algorithms for Haar wavelets, incorporating nonlinear wavelet approximation of lighting and BRDFs. Practically, we demonstrate rendering of images under new lighting and viewing conditions in a few seconds, significantly faster than previous techniques.
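In the point-sample basis the paper analyzes first, the triple product integral is simply a weighted elementwise product of the sampled lighting, visibility, and BRDF. The sampled functions and 1-D quadrature below are toy assumptions; the paper's contribution is doing this efficiently in the Haar wavelet basis instead.

```python
import numpy as np

n = 1000
theta = np.linspace(0.0, np.pi, n)
L = 1.0 + 0.5 * np.cos(theta)             # toy lighting
V = (theta < np.pi / 2).astype(float)     # toy visibility: upper half only
rho = np.full(n, 1.0 / np.pi)             # toy constant BRDF

w = np.pi / n                             # uniform quadrature weight
shaded = w * np.sum(L * V * rho)          # triple product integral at one vertex
print(shaded)                             # close to the analytic 0.5 + 0.5/pi
```

In the point basis this costs O(n) per vertex; the paper's Haar-wavelet algorithms exploit sparsity of all three factors to do better.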
A Frequency Analysis of Light Transport
 2005
Abstract

Cited by 106 (16 self)
We present a signal-processing framework for light transport. We study the frequency content of radiance and how it is altered by phenomena such as shading, occlusion, and transport. This extends previous work that considered either spatial or angular dimensions, and it offers a comprehensive treatment of both space and angle. We show that occlusion, a multiplication in the primal, amounts in the Fourier domain to a convolution by the spectrum of the blocker. Propagation corresponds to a shear in the space-angle frequency domain, while reflection on curved objects performs a different shear along the angular frequency axis. As shown by previous work, reflection is a convolution in the primal and therefore a multiplication in the Fourier domain. Our work shows how the spatial components of lighting are affected by this angular convolution. Our framework predicts the characteristics of interactions such as caustics and the disappearance of the shadows of small features. Predictions on the frequency content can then be used to control sampling rates for rendering. Other potential applications include precomputed radiance transfer and inverse rendering.
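The claim that occlusion (a multiplication in the primal) becomes a convolution by the blocker's spectrum in the Fourier domain can be checked numerically via the discrete convolution theorem; the 1-D signals below are synthetic assumptions.

```python
import numpy as np

n = 64
x = np.arange(n)
radiance = np.sin(2 * np.pi * 3 * x / n) + 2.0   # band-limited toy radiance
blocker = (x % 8 < 4).astype(float)              # periodic binary occluder

# Left side: multiply in the primal, then transform.
primal = np.fft.fft(radiance * blocker)
# Right side: circular convolution of the two spectra, scaled by 1/n.
A, B = np.fft.fft(radiance), np.fft.fft(blocker)
conv = np.fft.ifft(np.fft.fft(A) * np.fft.fft(B)) / n
print(np.allclose(primal, conv))                 # True: multiplication <-> convolution
```

The high-frequency content the blocker injects is exactly why the paper can predict where sharp shadows appear and where small features' shadows vanish.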
Structured Importance Sampling of Environment Maps
 2003
Abstract

Cited by 102 (9 self)
We introduce structured importance sampling, a new technique for efficiently rendering scenes illuminated by distant natural illumination given in an environment map. Our method handles occlusion and high-frequency lighting, and is significantly faster than alternative methods based on Monte Carlo sampling. We achieve this speedup as a result of several ideas. First, we present a new metric for stratifying and sampling an environment map, taking into account both the illumination intensity as well as the expected variance due to occlusion within the scene. We then present a novel hierarchical stratification algorithm that uses our metric to automatically stratify the environment map into regular strata. This approach enables a number of rendering optimizations, such as pre-integrating the illumination within each stratum to eliminate noise at the cost of adding bias, and sorting the strata to reduce the number of sample rays. We have rendered several scenes illuminated by natural lighting, and our results indicate that structured importance sampling is better than the best previous Monte Carlo techniques, requiring one to two orders of magnitude fewer samples for the same image quality.
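The starting point of the method, sampling an environment map in proportion to illumination intensity, can be sketched as drawing texels from an intensity-proportional distribution. The tiny map below is a synthetic assumption, and the paper's actual metric additionally weights expected occlusion variance, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
env = np.array([[0.1, 0.1, 8.0],     # toy 3x3 environment map with one
                [0.1, 0.2, 0.1],     # very bright texel and one moderate one
                [4.0, 0.1, 0.1]])

p = env.ravel() / env.sum()                    # intensity-proportional pmf
idx = rng.choice(p.size, size=10_000, p=p)     # draw 10k light samples
counts = np.bincount(idx, minlength=p.size)

bright = np.ravel_multi_index((0, 2), env.shape)
print(counts[bright] > counts.mean())          # True: brightest texel dominates
```

Structured importance sampling goes further by stratifying the map into regular strata and pre-integrating within each, trading a little bias for much less noise.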
Precomputing interactive dynamic deformable scenes
 ACM Trans. Graph., 2003
Abstract

Cited by 90 (8 self)
[Figure caption (truncated): … dynamics by driving the scene with parameterized interactions representative of runtime usage. (b) Model reduction on observed dynamic deformations yields a low-rank approximation to the system’s parameterized impulse response functions. (c) Deformed state geometries are then sampled and used to precompute and co-parameterize a radiance transfer model for deformable objects. (d) The final simulation responds plausibly to interactions similar to those precomputed, includes complex collision and global illumination effects, and runs in real time.]
We present an approach for precomputing data-driven models of interactive physically based deformable scenes. The method permits real-time hardware synthesis of nonlinear deformation dynamics, including self-contact and global illumination effects, and supports real-time user interaction. We use data-driven tabulation of the system’s deterministic state space dynamics, and model reduction to build efficient low-rank parameterizations of the deformed shapes. To support runtime interaction, we also tabulate impulse response functions for a palette of external excitations. Although our approach simulates particular systems under very particular interaction conditions, it has several advantages. First, parameterizing all possible scene deformations enables us to precompute novel reduced co-parameterizations of global scene illumination for low-frequency lighting conditions. Second, because the deformation dynamics are precomputed and parameterized as a whole, collisions are resolved within the scene during precomputation so that runtime self-collision handling is implicit. Optionally, the data-driven models can be synthesized on programmable graphics hardware, leaving only the low-dimensional state space dynamics and appearance data models to be computed by the main CPU.
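The model-reduction step can be sketched as a snapshot SVD: collect deformed shapes as columns, take the leading left singular vectors as a low-rank basis, and represent each shape by a few reduced coordinates. The snapshot data and rank below are synthetic assumptions.

```python
import numpy as np

n_verts, n_snaps, r = 300, 40, 5
rng = np.random.default_rng(4)
# Synthetic snapshots that genuinely live in a 5-dimensional subspace:
modes = rng.standard_normal((n_verts, r))
coords = rng.standard_normal((r, n_snaps))
snapshots = modes @ coords                     # (n_verts, n_snaps) deformed shapes

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                               # reduced deformation basis
q = basis.T @ snapshots                        # low-dimensional state per snapshot
recon = basis @ q                              # shapes rebuilt from reduced coords
print(np.allclose(recon, snapshots))           # True: exact for rank-5 data
```

At runtime only the small state vector q evolves; the full shape is reconstructed on demand, which is what makes hardware synthesis feasible.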