Results 1 - 10 of 107
Mesostructure from specularity
- In CVPR ’06: Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006
"... We describe a simple and robust method for surface mesostructure acquisition. Our method builds on the observation that specular reflection is a reliable visual cue for surface mesostructure perception. In contrast to most photometric stereo methods, which take specularities as outliers and discard ..."
Abstract
-
Cited by 46 (4 self)
We describe a simple and robust method for surface mesostructure acquisition. Our method builds on the observation that specular reflection is a reliable visual cue for surface mesostructure perception. In contrast to most photometric stereo methods, which take specularities as outliers and discard them, we propose a progressive acquisition system that captures a dense specularity field as the only information for mesostructure reconstruction. Our method can efficiently recover surfaces with fine-scale geometric details from complex real-world objects with a wide variety of reflection properties, including translucent, low albedo, and highly specular objects. We show results for a variety of objects including human skin, dried apricot, orange, jelly candy, black leather and dark chocolate.
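The cue this abstract builds on has a simple geometric core: at a mirror-like specular highlight, the surface normal bisects the light and view directions. A minimal sketch of that halfway-vector relation, assuming calibrated light and view directions (the function name and example values are illustrative, not taken from the paper):

    import numpy as np

    def normal_from_specularity(light_dir, view_dir):
        # At a pixel showing a mirror-like specular highlight, the surface
        # normal is the normalized halfway vector between light and view.
        l = np.asarray(light_dir, dtype=float)
        v = np.asarray(view_dir, dtype=float)
        l /= np.linalg.norm(l)
        v /= np.linalg.norm(v)
        h = l + v
        return h / np.linalg.norm(h)

    # Hypothetical usage: light from the upper left, camera along +z.
    n = normal_from_specularity([-1.0, 1.0, 1.0], [0.0, 0.0, 1.0])

Sweeping light positions and recording which pixels fire a highlight is roughly how a dense specularity field translates into per-pixel orientation information.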
Multiplexing for optimal lighting
- IEEE Trans. PAMI, 2007
"... Abstract—Imaging of objects under variable lighting directions is an important and frequent practice in computer vision, machine vision, and image-based rendering. Methods for such imaging have traditionally used only a single light source per acquired image. They may result in images that are too d ..."
Abstract
-
Cited by 45 (9 self)
Imaging of objects under variable lighting directions is an important and frequent practice in computer vision, machine vision, and image-based rendering. Methods for such imaging have traditionally used only a single light source per acquired image. They may result in images that are too dark and noisy, e.g., due to the need to avoid saturation of highlights. We introduce an approach that can significantly improve the quality of such images, in which multiple light sources illuminate the object simultaneously from different directions. These illumination-multiplexed frames are then computationally demultiplexed. The approach is useful for imaging dim objects, as well as objects having a specular reflection component. We give the optimal scheme by which lighting should be multiplexed to obtain the highest quality output, for signal-independent noise. The scheme is based on Hadamard codes. The consequences of imperfections such as stray light, saturation, and noisy illumination sources are then studied. In addition, the paper analyzes the implications of shot noise, which is signal-dependent, for Hadamard multiplexing. The approach facilitates practical lighting setups having high directional resolution. This is shown by a setup we devise, which is flexible, scalable, and programmable. We used it to demonstrate the benefit of multiplexing in experiments. Index Terms: Physics-based vision, image-based rendering, multiplexed illumination, Hadamard codes, photon noise.
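As a rough illustration of the Hadamard-code idea, the sketch below multiplexes illumination with a standard S-matrix derived from a Sylvester Hadamard matrix and demultiplexes by matrix inversion. The matrix size, the noise-free image model, and the variable names are assumptions for illustration, not the paper's setup:

    import numpy as np
    from scipy.linalg import hadamard

    def s_matrix(m):
        # Hadamard-based S-matrix (0/1 lighting codes) for m - 1 light sources,
        # m a power of two; each code turns on about half of the lights.
        H = hadamard(m)                  # Sylvester Hadamard matrix, entries +/-1
        return (1 - H[1:, 1:]) // 2      # drop the all-ones row/column, map -1 -> 1, +1 -> 0

    S = s_matrix(8)                              # codes for 7 light sources
    single = np.random.rand(7, 1000)             # hypothetical single-source images (7 sources x 1000 pixels)
    multiplexed = S @ single                     # frames captured with ~half the lights on
    recovered = np.linalg.inv(S) @ multiplexed   # computational demultiplexing

The benefit comes from each frame collecting light from roughly half of the sources, while the inversion still recovers the individual single-source images.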
Inverse shade trees for non-parametric material representation and editing
- ACM Trans. Graph., 2006
"... Recent progress in the measurement of surface reflectance has created a demand for non-parametric appearance representations that are accurate, compact, and easy to use for rendering. Another crucial goal, which has so far received little attention, is editability: for practical use, we must be able ..."
Abstract
-
Cited by 44 (13 self)
Recent progress in the measurement of surface reflectance has created a demand for non-parametric appearance representations that are accurate, compact, and easy to use for rendering. Another crucial goal, which has so far received little attention, is editability: for practical use, we must be able to change both the directional and spatial behavior of surface reflectance (e.g., making one material shinier, another more anisotropic, and changing the spatial “texture maps” indicating where each material appears). We introduce an Inverse Shade Tree framework that provides a general approach to estimating the “leaves” of a user-specified shade tree from high-dimensional measured datasets of appearance. These leaves are sampled 1- and 2-dimensional functions that capture both the directional behavior of individual materials and their spatial mixing patterns. In order to compute these shade trees automatically, we map the problem to matrix factorization and introduce a flexible new algorithm that allows for constraints such as non-negativity, sparsity, and energy conservation. Although we cannot infer every type of shade tree, we demonstrate the ability to reduce multi-gigabyte measured datasets of the Spatially-Varying Bidirectional Reflectance Distribution Function (SVBRDF) into a compact representation that may be edited in real time.
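The factorization step described here can be illustrated with a generic constrained factorization. The sketch below uses Lee-Seung multiplicative-update NMF as a stand-in (it enforces only non-negativity; the paper's own algorithm also supports sparsity and energy conservation, which are not reproduced here), with hypothetical matrix sizes:

    import numpy as np

    def nmf(V, rank, iters=200, eps=1e-9):
        # Non-negative factorization V ~ W @ H via Lee-Seung multiplicative
        # updates; W: spatial mixing weights, H: per-material reflectance samples.
        rng = np.random.default_rng(0)
        n, m = V.shape
        W = rng.random((n, rank)) + eps
        H = rng.random((rank, m)) + eps
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # Hypothetical example: 5000 surface points x 256 BRDF samples, 3 materials.
    V = np.abs(np.random.rand(5000, 256))
    W, H = nmf(V, rank=3)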
Seeing people in different light: Joint shape, motion and reflectance capture
- IEEE TVCG
"... Abstract—By means of passive optical motion capture, real people can be authentically animated and photo-realistically textured. To import real-world characters into virtual environments, however, surface reflectance properties must also be known. We describe a video-based modeling approach that cap ..."
Abstract
-
Cited by 37 (8 self)
By means of passive optical motion capture, real people can be authentically animated and photo-realistically textured. To import real-world characters into virtual environments, however, surface reflectance properties must also be known. We describe a video-based modeling approach that captures human shape and motion as well as reflectance characteristics from a handful of synchronized video recordings. The presented method is able to recover spatially varying surface reflectance properties of clothes from multiview video footage. The resulting model description enables us to realistically reproduce the appearance of animated virtual actors under different lighting conditions, as well as to interchange surface attributes among different people, e.g., for virtual dressing. Our contribution can be used to create 3D renditions of real-world people under arbitrary novel lighting conditions on standard graphics hardware. Index Terms: 3D video, dynamic reflectometry, real-time rendering, relighting.
Illumination Multiplexing within Fundamental Limits
"... Taking a sequence of photographs using multiple illumination sources or settings is central to many computer vision and graphics problems. A growing number of recent methods use multiple sources rather than single point sources in each frame of the sequence. Potential benefits include increased sign ..."
Abstract
-
Cited by 30 (4 self)
Taking a sequence of photographs using multiple illumination sources or settings is central to many computer vision and graphics problems. A growing number of recent methods use multiple sources rather than single point sources in each frame of the sequence. Potential benefits include increased signal-to-noise ratio and accommodation of scene dynamic range. However, existing multiplexing schemes, including Hadamard-based codes, are inhibited by fundamental limits set by Poisson-distributed photon noise and by sensor saturation. The prior schemes may actually be counterproductive due to these effects. We derive multiplexing codes that are optimal under these fundamental effects. Thus, the novel codes generalize the prior schemes and have much broader applicability. Our approach is based on formulating the problem as a constrained optimization. We further suggest an algorithm to solve this optimization problem. The superiority and effectiveness of the method are demonstrated in experiments involving object illumination.
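In the spirit of the constrained-optimization formulation mentioned here, the toy sketch below searches for a square multiplexing matrix that minimizes demultiplexing error under an assumed Poisson-plus-read-noise model, with code entries bounded in [0, 1] and a saturation cap on each frame. The objective, noise model, parameter values, and names are illustrative assumptions, not the paper's actual derivation:

    import numpy as np
    from scipy.optimize import minimize

    def demux_mse(w_flat, n, i_ref, read_var=1.0):
        # Error of the demultiplexed images for multiplexing matrix W:
        # trace(W^{-1} diag(var) W^{-T}), with photon (signal-dependent) + read noise.
        W = w_flat.reshape(n, n)
        Winv = np.linalg.pinv(W)
        var = W @ i_ref + read_var        # per-frame noise variance (assumed model)
        return np.sum((Winv ** 2) * var)  # sums Winv[i, j]^2 * var[j]

    n = 7
    i_ref = np.full(n, 50.0)              # nominal single-source brightness (hypothetical units)
    sat = 400.0                           # sensor saturation level (hypothetical)
    rng = np.random.default_rng(0)
    cons = [{"type": "ineq",              # keep every multiplexed frame below saturation
             "fun": lambda w: sat - w.reshape(n, n) @ i_ref}]
    res = minimize(demux_mse, rng.random(n * n), args=(n, i_ref),
                   bounds=[(0.0, 1.0)] * (n * n), constraints=cons, method="SLSQP")
    W_opt = res.x.reshape(n, n)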
A Photometric Approach for Estimating Normals and Tangents
"... This paper presents a technique for acquiring the shape of realworld objects with complex isotropic and anisotropic reflectance. Our method estimates the local normal and tangent vectors at each pixel in a reference view from a sequence of images taken under varying point lighting. We show that for ..."
Abstract
-
Cited by 24 (3 self)
This paper presents a technique for acquiring the shape of real-world objects with complex isotropic and anisotropic reflectance. Our method estimates the local normal and tangent vectors at each pixel in a reference view from a sequence of images taken under varying point lighting. We show that for many real-world materials and a restricted set of light positions, the 2D slice of the BRDF obtained by fixing the local view direction is symmetric under reflections of the halfway vector across the normal-tangent and normal-binormal planes. Based on this analysis, we develop an optimization that estimates the local surface frame by identifying these planes of symmetry in the measured BRDF. As with other photometric methods, a key benefit of our approach is that the input is easy to acquire and is less sensitive to calibration errors than stereo or multi-view techniques. Unlike prior work, our approach allows estimating the surface tangent in the case of anisotropic reflectance. We confirm the accuracy and reliability of our approach with analytic and measured data, present several normal and tangent fields acquired with our technique, and demonstrate applications to appearance editing.
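The symmetry test at the heart of this abstract can be sketched as a scoring function: express the measured halfway vectors in a candidate (normal, tangent, binormal) frame, reflect them across the two symmetry planes, and compare intensities at the reflected directions. The nearest-neighbour matching and brute-force frame search suggested below are illustrative simplifications, not the paper's optimization:

    import numpy as np
    from scipy.spatial import cKDTree

    def symmetry_error(frame, halfway, intensity):
        # Score a candidate local frame (rows: normal, tangent, binormal) by how
        # symmetric the measured BRDF slice is under reflection of the halfway
        # vector across the normal-tangent and normal-binormal planes.
        h_local = halfway @ frame.T               # halfway vectors in (n, t, b) coordinates
        tree = cKDTree(h_local)
        err = 0.0
        for flip_axis in (1, 2):                  # flipping t or b reflects across a symmetry plane
            reflected = h_local.copy()
            reflected[:, flip_axis] *= -1.0
            _, idx = tree.query(reflected)        # nearest measured sample to each reflected direction
            err += np.mean((intensity - intensity[idx]) ** 2)
        return err

    # Hypothetical usage: evaluate symmetry_error over a grid of candidate frames
    # and keep the frame with the lowest error as the estimated normal and tangent.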
Robust fusion of dynamic shape and normal capture for high-quality reconstruction of time-varying geometry
- In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2008
"... This paper describes a new passive approach to capture time-varying scene geometry in large acquisition volumes from multi-view video. It can be applied to reconstruct complete moving models of human actors that feature even slightest dynamic geometry detail, such as wrinkles and folds in clothing, ..."
Abstract
-
Cited by 23 (1 self)
This paper describes a new passive approach to capture time-varying scene geometry in large acquisition volumes from multi-view video. It can be applied to reconstruct complete moving models of human actors that feature even the slightest dynamic geometry detail, such as wrinkles and folds in clothing, and that can be viewed from 360°. Starting from multi-view video streams recorded under calibrated lighting, we first perform marker-less human motion capture based on a smooth template with no high-frequency surface detail. Subsequently, surface reflectance and time-varying normal fields are estimated based on the coarse template shape. The main contribution of this paper is a new statistical approach to solve the non-trivial problem of transforming the captured normal field that is defined over the smooth non-planar 3D template into true 3D displacements. Our spatio-temporal reconstruction method outputs displaced geometry that is accurate at each time step of the video and temporally smooth, even if the input data are affected by noise.
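The normal-to-displacement step is the crux of this abstract. As a much simplified illustration (a planar height-field case, not the paper's statistical method over a non-planar template), normals can be converted to gradients and integrated by least squares:

    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.linalg import lsqr

    def integrate_normals(normals):
        # Recover a height field from unit normals (H x W x 3) by least-squares
        # integration of the implied gradients p = -nx/nz and q = -ny/nz.
        H, W, _ = normals.shape
        p = -normals[..., 0] / normals[..., 2]
        q = -normals[..., 1] / normals[..., 2]
        n_eq = H * (W - 1) + W * (H - 1) + 1
        A = lil_matrix((n_eq, H * W))
        b = np.zeros(n_eq)
        row = 0
        for y in range(H):
            for x in range(W):
                i = y * W + x
                if x + 1 < W:                 # z(x+1, y) - z(x, y) = p(x, y)
                    A[row, i + 1], A[row, i], b[row] = 1.0, -1.0, p[y, x]
                    row += 1
                if y + 1 < H:                 # z(x, y+1) - z(x, y) = q(x, y)
                    A[row, i + W], A[row, i], b[row] = 1.0, -1.0, q[y, x]
                    row += 1
        A[row, 0] = 1.0                       # pin one height to fix the integration constant
        return lsqr(A.tocsr(), b)[0].reshape(H, W)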
State of the Art in Transparent and Specular Object Reconstruction
- Eurographics 2008 STAR – State of the Art Report, 2008
"... This state of the art report covers reconstruction methods for transparent and specular objects or phenomena. While the 3D acquisition of opaque surfaces with lambertian reflectance is a well-studied problem, transparent, refractive, specular and potentially dynamic scenes pose challenging problems ..."
Abstract
-
Cited by 23 (1 self)
This state of the art report covers reconstruction methods for transparent and specular objects or phenomena. While the 3D acquisition of opaque surfaces with Lambertian reflectance is a well-studied problem, transparent, refractive, specular and potentially dynamic scenes pose challenging problems for acquisition systems. This report reviews and categorizes the literature in this field. Despite tremendous interest in object digitization, the acquisition of digital models of transparent or specular objects is far from being a solved problem. On the other hand, real-world data is in high demand for applications such as object modeling, preservation of historic artifacts and as input to data-driven modeling techniques. With this report we aim at providing a reference for and an introduction to the field of transparent and specular object reconstruction. We describe acquisition approaches for different classes of objects. Transparent objects and phenomena that do not change the straight ray geometry can be found foremost in nature. Refraction effects are usually small and can be considered negligible for these objects. Phenomena as diverse as fire, smoke, and interstellar nebulae can be modeled using a straight ray model of image formation. Refractive and specular surfaces, on the other hand, change the straight rays into usually piecewise linear ray paths, adding additional complexity to the reconstruction problem. Translucent objects exhibit significant sub-surface scattering effects, rendering traditional acquisition approaches unstable. Different classes of techniques have been developed to deal with these problems, and good reconstruction results can be achieved with current state-of-the-art techniques. However, the approaches are still specialized and targeted at very specific object classes. We classify the existing literature and hope to provide an entry point to this exciting field.
Principles of Appearance Acquisition and Representation
- SIGGRAPH 2008 Class Notes, 2008
"... Algorithms for scene understanding and realistic image synthesis require accurate models of the way real-world materials scatter light. This class describes recent work in the graphics community to measure the spatially- and directionally-varying reflectance and subsurface scattering of complex mate ..."
Abstract
-
Cited by 22 (2 self)
Algorithms for scene understanding and realistic image synthesis require accurate models of the way real-world materials scatter light. This class describes recent work in the graphics community to measure the spatially- and directionally-varying reflectance and subsurface scattering of complex materials, and to develop efficient representations and analysis tools for these datasets. We describe the design of acquisition devices and capture strategies for BRDFs and BSSRDFs, efficient factored representations, and a case study of capturing the appearance of human faces.
Photogeometric Structured Light: A Self-Calibrating and Multi-Viewpoint Framework for Accurate 3D Modeling
, 2008
"... Structured-light methods actively generate geometric correspondence data between projectors and cameras in order to facilitate robust 3D reconstruction. In this paper, we present Photogeometric Structured Light whereby a standard structured light method is extended to include photometric methods. Ph ..."
Abstract
-
Cited by 21 (5 self)
Structured-light methods actively generate geometric correspondence data between projectors and cameras in order to facilitate robust 3D reconstruction. In this paper, we present Photogeometric Structured Light, whereby a standard structured-light method is extended to include photometric methods. Photometric processing serves the double purpose of increasing the amount of recovered surface detail and of enabling the structured-light setup to be robustly self-calibrated. Further, our framework uses a photogeometric optimization that supports the simultaneous use of multiple cameras and projectors and yields a single, accurate multi-view 3D model that best complies with both photometric and geometric data.
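One common way to combine the two data sources mentioned in this abstract is to refine structured-light depth so that its gradients agree with photometric normals while staying close to the measured depth. The Jacobi-style screened-Poisson sketch below is only illustrative of that general idea (single view, wrap-around boundary handling, hypothetical parameters), not the paper's multi-camera, multi-projector optimization:

    import numpy as np

    def fuse_depth_with_normals(depth, normals, lam=0.1, iters=500):
        # Refine a depth map so its gradients match those implied by the
        # photometric normals while staying close (weight lam) to the measured
        # depth: Jacobi iterations on a screened Poisson equation.
        p = -normals[..., 0] / normals[..., 2]    # target x-gradient from normals
        q = -normals[..., 1] / normals[..., 2]    # target y-gradient from normals
        div = (p - np.roll(p, 1, axis=1)) + (q - np.roll(q, 1, axis=0))
        z = depth.copy()
        for _ in range(iters):
            zp = np.pad(z, 1, mode="edge")
            nb_sum = zp[1:-1, :-2] + zp[1:-1, 2:] + zp[:-2, 1:-1] + zp[2:, 1:-1]
            z = (nb_sum - div + lam * depth) / (4.0 + lam)
        return z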