Results 1–10 of 448
Reflectance and texture of real-world surfaces
 ACM TRANS. GRAPHICS
, 1999
"... In this work, we investigate the visual appearance of real-world surfaces and the dependence of appearance on scale, viewing direction and illumination direction. At fine scale, surface variations cause local intensity variation or image texture. The appearance of this texture depends on both illumina ..."
Abstract

Cited by 586 (23 self)
In this work, we investigate the visual appearance of real-world surfaces and the dependence of appearance on scale, viewing direction and illumination direction. At fine scale, surface variations cause local intensity variation or image texture. The appearance of this texture depends on both illumination and viewing direction and can be characterized by the BTF (bidirectional texture function). At sufficiently coarse scale, local image texture is not resolvable and local image intensity is uniform. The dependence of this image intensity on illumination and viewing direction is described by the BRDF (bidirectional reflectance distribution function). We simultaneously measure the BTF and BRDF of over 60 different rough surfaces, each observed with over 200 different combinations of viewing and illumination direction. The resulting BTF database is comprised of over 12,000 image textures. To enable convenient use of the BRDF measurements, we fit the measurements to two recent models and obtain a BRDF parameter database. These parameters can be used directly in image analysis and synthesis of a wide variety of surfaces. The BTF, BRDF, and BRDF parameter databases have important implications for computer vision and computer graphics, and each is made publicly available.
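The view- and illumination-dependence described in this abstract can be illustrated with a toy BRDF evaluation. The Lambertian-plus-Phong model below is an illustrative stand-in only, not one of the two models actually fitted in the paper:

```python
import math

def phong_brdf(albedo, ks, shininess, n, l, v):
    """Toy BRDF: Lambertian diffuse term plus a Phong-style specular lobe.

    n, l, v are unit 3-vectors (surface normal, light direction, view
    direction). Illustrative only; not the models fitted in the paper.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    nl = dot(n, l)
    # Mirror reflection of l about n: r = 2(n.l)n - l
    r = tuple(2.0 * nl * nc - lc for nc, lc in zip(n, l))
    diffuse = albedo / math.pi
    specular = ks * max(0.0, dot(r, v)) ** shininess
    return diffuse + specular

n = (0.0, 0.0, 1.0)
l = (0.0, 0.0, 1.0)                                  # light from directly above
head_on = phong_brdf(0.5, 0.3, 20, n, l, (0.0, 0.0, 1.0))
grazing = phong_brdf(0.5, 0.3, 20, n, l, (1.0, 0.0, 0.0))
# The same surface point reflects more toward the mirror direction,
# which is exactly the view-dependence a BRDF measurement captures.
```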
A theory of shape by space carving
 In Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV99), volume I, pages 307–314, Los Alamitos, CA
, 1999
"... In this paper we consider the problem of computing the 3D shape of an unknown, arbitrarily-shaped scene from multiple photographs taken at known but arbitrarily-distributed viewpoints. By studying the equivalence class of all 3D shapes that reproduce the input photographs, we prove the existence of a ..."
Abstract

Cited by 574 (14 self)
In this paper we consider the problem of computing the 3D shape of an unknown, arbitrarily-shaped scene from multiple photographs taken at known but arbitrarily-distributed viewpoints. By studying the equivalence class of all 3D shapes that reproduce the input photographs, we prove the existence of a special member of this class, the photo hull, that (1) can be computed directly from photographs of the scene, and (2) subsumes all other members of this class. We then give a provably-correct algorithm, called Space Carving, for computing this shape and present experimental results on complex real-world scenes. The approach is designed to (1) build photorealistic shapes that accurately model scene appearance from a wide range of viewpoints, and (2) account for the complex interactions between occlusion, parallax, shading, and their effects on arbitrary views of a 3D scene.
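The carving idea can be sketched as a fixed-point loop over a voxel set. Here `consistent` is a hypothetical stand-in for the paper's photo-consistency test (which also handles visibility and occlusion), run on a deliberately trivial one-dimensional scene:

```python
def space_carve(voxels, views, consistent, max_sweeps=10):
    """Toy space-carving loop: repeatedly remove voxels that fail the
    photo-consistency test until the set stabilizes (the photo hull)."""
    voxels = set(voxels)
    for _ in range(max_sweeps):
        carved = {v for v in voxels
                  if not all(consistent(v, view, voxels) for view in views)}
        if not carved:
            break
        voxels -= carved
    return voxels

# Toy scene: voxels on a line; two "cameras" both observe color 1 only at
# positions 2 and 3, so every other voxel is inconsistent and is carved.
observed = {0: {2: 1, 3: 1}, 1: {2: 1, 3: 1}}   # view -> position -> color
def consistent(v, view, voxels):
    return observed[view].get(v) == 1

hull = space_carve(range(6), views=[0, 1], consistent=consistent)
# hull == {2, 3}
```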
Modeling the Interaction of Light Between Diffuse Surfaces
, 1984
"... A method is described which models the interaction of light between diffusely reflecting surfaces. Current light reflection models used in computer graphics do not account for the object-to-object reflection between diffuse surfaces, and thus incorrectly compute the global illumination effects. The ..."
Abstract

Cited by 398 (6 self)
A method is described which models the interaction of light between diffusely reflecting surfaces. Current light reflection models used in computer graphics do not account for the object-to-object reflection between diffuse surfaces, and thus incorrectly compute the global illumination effects. The new procedure, based on methods used in thermal engineering, includes the effects of diffuse light sources of finite area, as well as the "color-bleeding" effects which are caused by the diffuse reflections. A simple environment is used to illustrate these simulated effects and is presented with photographs of a physical model. The procedure is applicable to environments composed of ideal diffuse reflectors and can account for direct illumination from a variety of light sources. The resultant surface intensities are independent of observer position, and thus environments can be preprocessed for dynamic sequences.
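The thermal-engineering formulation referred to here is the radiosity system B_i = E_i + rho_i * sum_j F_ij B_j. A minimal Jacobi-iteration sketch for two facing patches, with made-up form factors, shows the color-bleeding behavior (a non-emitting patch still ends up lit):

```python
def solve_radiosity(emission, reflectance, form_factors, iters=100):
    """Jacobi iteration for the radiosity system
    B_i = E_i + rho_i * sum_j F_ij * B_j, where form_factors[i][j] is the
    fraction of energy leaving patch i that arrives at patch j."""
    n = len(emission)
    b = list(emission)
    for _ in range(iters):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

# Two facing patches: patch 0 emits, patch 1 only reflects; the nonzero
# B[1] is the "color-bleeding" of indirect diffuse light.
E   = [1.0, 0.0]
rho = [0.5, 0.5]
F   = [[0.0, 0.2],
       [0.2, 0.0]]
B = solve_radiosity(E, rho, F)
```

The fixed point is B[0] = 1/0.99 and B[1] = 0.1/0.99, reached to machine precision well within 100 sweeps because the iteration contracts by the spectral radius of the reflection operator.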
What is the Set of Images of an Object Under All Possible Lighting Conditions?
 IEEE CVPR
, 1996
"... The appearance of a particular object depends on both the viewpoint from which it is observed and the light sources by which it is illuminated. If the appearance of two objects is never identical for any pose or lighting conditions, then in theory the objects can always be distinguished or recogni ..."
Abstract

Cited by 389 (26 self)
The appearance of a particular object depends on both the viewpoint from which it is observed and the light sources by which it is illuminated. If the appearance of two objects is never identical for any pose or lighting conditions, then in theory the objects can always be distinguished or recognized. The question arises: What is the set of images of an object under all lighting conditions and pose? In this paper, we consider only the set of images of an object under variable illumination (including multiple, extended light sources and attached shadows). We prove that the set of n-pixel images of a convex object with a Lambertian reflectance function, illuminated by an arbitrary number of point light sources at infinity, forms a convex polyhedral cone in R^n and that the dimension of this illumination cone equals the number of distinct surface normals. Furthermore, we show that the cone for a particular object can be constructed from three properly chosen images. Finally, we prove that the set of n-pixel images of an object of any shape and with an arbitrary reflectance function, seen under all possible illumination conditions, still forms a convex cone in R^n. These results immediately suggest certain approaches to object recognition. Throughout this paper, we offer results demonstrating the empirical validity of the illumination cone representation.
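The cone property rests on superposition: images under multiple light sources add pixel-wise, so nonnegative combinations of images are again valid images. A toy two-pixel rendering of a convex Lambertian surface (illustrative, not the paper's construction):

```python
import math

def render(normals, albedos, light):
    """n-pixel image of a convex Lambertian surface under a distant point
    source s: pixel i = albedo_i * max(0, n_i . s); the max models
    attached shadows."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return [rho * max(0.0, dot(n, light)) for n, rho in zip(normals, albedos)]

normals = [(0.0, 0.0, 1.0), (math.sqrt(0.5), 0.0, math.sqrt(0.5))]
albedos = [1.0, 1.0]
img1 = render(normals, albedos, (1.0, 0.0, 0.0))   # grazing source: pixel 0 dark
img2 = render(normals, albedos, (0.0, 0.0, 1.0))   # overhead source
# Nonnegative combinations of images stay in the image set (convex cone).
# Since neither pixel is shadowed under the combined direction here, the
# combination also equals the image under a single summed source.
combo = [3.0 * a + 2.0 * b for a, b in zip(img1, img2)]
summed_source = render(normals, albedos, (3.0, 0.0, 2.0))
```

When attached shadows do clip some sources (the `max`), the set is a cone rather than a linear subspace, which is exactly the paper's point.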
Shape from Shading: A Survey
 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
, 1999
"... ... this paper, six well-known SFS algorithms are implemented and compared. The performance of the algorithms was analyzed on synthetic images using mean and standard deviation of depth (Z) error, mean of surface gradient (p, q) error, and CPU timing. Each algorithm works well for certain images, ..."
Abstract

Cited by 306 (1 self)
... this paper, six well-known SFS algorithms are implemented and compared. The performance of the algorithms was analyzed on synthetic images using mean and standard deviation of depth (Z) error, mean of surface gradient (p, q) error, and CPU timing. Each algorithm works well for certain images, but performs poorly for others. In general, minimization approaches are more robust, while the other approaches are faster. The implementation of these algorithms in C and images used in this paper are available by anonymous ftp under the pub/tech_paper/survey directory at eustis.cs.ucf.edu (132.170.108.42). These are also part of the electronic version of the paper.
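The error metrics named in the abstract are simple to compute; a minimal sketch (the survey's exact formulas may differ in detail, e.g. in how errors are signed or normalized):

```python
import math

def depth_error_stats(z_true, z_est):
    """Mean and standard deviation of the absolute depth (Z) error."""
    errs = [abs(t - e) for t, e in zip(z_true, z_est)]
    mean = sum(errs) / len(errs)
    var = sum((x - mean) ** 2 for x in errs) / len(errs)
    return mean, math.sqrt(var)

def gradient_error(p_true, q_true, p_est, q_est):
    """Mean surface-gradient (p, q) error: average Euclidean distance
    between true and estimated gradient vectors."""
    return sum(math.hypot(pt - pe, qt - qe)
               for pt, pe, qt, qe in zip(p_true, p_est, q_true, q_est)) \
        / len(p_true)

mean_z, std_z = depth_error_stats([1.0, 2.0, 3.0], [1.1, 2.0, 2.7])
mean_g = gradient_error([1.0], [1.0], [0.0], [0.0])
```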
Non-Linear Approximation of Reflectance Functions
, 1997
"... We introduce a new class of primitive functions with non-linear parameters for representing light reflectance functions. The functions are reciprocal, energy-conserving and expressive. They can capture important phenomena such as off-specular reflection, increasing reflectance and retro-reflection. ..."
Abstract

Cited by 270 (10 self)
We introduce a new class of primitive functions with non-linear parameters for representing light reflectance functions. The functions are reciprocal, energy-conserving and expressive. They can capture important phenomena such as off-specular reflection, increasing reflectance and retro-reflection. We demonstrate this by fitting sums of primitive functions to a physically-based model and to actual measurements. The resulting representation is simple, compact and uniform. It can be applied efficiently in analytical and Monte Carlo computations. CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.3.3 [Computer Graphics]: Picture/Image Generation. Keywords: Reflectance function, BRDF representation. 1 INTRODUCTION The bidirectional reflectance distribution function (BRDF) of a material describes how light is scattered at its surface. It determines the appearance of objects in a scene, through direct illumination and global interreflection effects. Local r...
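Fitting primitives with non-linear parameters can be sketched with a single cosine-lobe basis max(0, cos a)^n, where the exponent n is the non-linear parameter: the amplitude is solved in closed form, the exponent by search. This toy is far simpler than the paper's reciprocal, energy-conserving primitives:

```python
def lobe(cos_a, n):
    """Primitive cosine-lobe basis max(0, cos a)^n; the exponent n is the
    nonlinear parameter controlling lobe width."""
    return max(0.0, cos_a) ** n

def fit_amplitude(samples, n):
    """Least-squares amplitude c for c * lobe(cos_a, n) fitted to
    (cos_a, value) samples: closed form c = sum(v*b) / sum(b*b)."""
    num = sum(v * lobe(ca, n) for ca, v in samples)
    den = sum(lobe(ca, n) ** 2 for ca, _ in samples)
    return num / den

# Synthetic "measurements" from a lobe of amplitude 2, exponent 10; a grid
# search over the nonlinear exponent recovers both parameters.
samples = [(c / 10.0, 2.0 * lobe(c / 10.0, 10)) for c in range(1, 11)]
best_n = min(range(1, 21), key=lambda n: sum(
    (v - fit_amplitude(samples, n) * lobe(ca, n)) ** 2 for ca, v in samples))
```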
A Signal-Processing Framework for Inverse Rendering
 In SIGGRAPH 01
, 2001
"... Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limit ..."
Abstract

Cited by 250 (21 self)
Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.
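The convolution-as-product structure is easy to see in the Lambertian special case, where the clamped-cosine kernel has spherical-harmonic transfer coefficients A_0 = pi, A_1 = 2pi/3, A_2 = pi/4, with odd bands above l = 1 vanishing. A minimal sketch:

```python
import math

# Lambertian transfer coefficients A_l for the clamped-cosine kernel;
# bands absent from this table are treated as zero.
A = {0: math.pi, 1: 2 * math.pi / 3, 2: math.pi / 4}

def reflected_coeffs(lighting_coeffs):
    """Convolution theorem: each reflected-light SH coefficient is the
    product of the lighting coefficient and the kernel's transfer
    coefficient for that band. Inverse rendering divides instead,
    L_lm = B_lm / A_l, which is ill-conditioned wherever A_l ~ 0."""
    return {(l, m): A.get(l, 0.0) * c
            for (l, m), c in lighting_coeffs.items()}

lighting = {(0, 0): 1.0, (1, 0): 0.5, (2, 0): 0.2, (3, 0): 0.4}
b = reflected_coeffs(lighting)
# The l = 3 band is annihilated (A_3 = 0): that part of the lighting is
# unrecoverable from a Lambertian surface, an ill-posed inverse problem.
```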
Reflection from Layered Surfaces due to Subsurface Scattering
, 1993
"... The reflection of light from most materials consists of two major terms: the specular and the diffuse. Specular reflection may be modeled from first principles by considering a rough surface consisting of perfect reflectors, or microfacets. Diffuse reflection is generally considered to result from ..."
Abstract

Cited by 233 (4 self)
The reflection of light from most materials consists of two major terms: the specular and the diffuse. Specular reflection may be modeled from first principles by considering a rough surface consisting of perfect reflectors, or microfacets. Diffuse reflection is generally considered to result from multiple scattering either from a rough surface or from within a layer near the surface. Accounting for diffuse reflection by Lambert's Cosine Law, as is universally done in computer graphics, is not a physical theory based on first principles. This paper presents
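Lambert's cosine law, as conventionally applied in graphics and critiqued in this abstract, reads L_r = (rho/pi) * E * cos(theta_i), identical in every viewing direction:

```python
import math

def lambert_radiance(albedo, irradiance, cos_theta_i):
    """Lambert's cosine law: reflected radiance (rho/pi) * E * cos(theta_i),
    the same in every viewing direction. The paper's point is that this is
    an empirical convenience, not a first-principles model of the
    subsurface scattering that actually produces the diffuse term."""
    return (albedo / math.pi) * irradiance * max(0.0, cos_theta_i)

r = lambert_radiance(0.8, 10.0, math.cos(math.radians(30)))
```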
Object Shape and Reflectance Modeling from Observation
, 1997
"... An object model for computer graphics applications should contain two aspects of information: shape and reflectance properties of the object. A number of techniques have been developed for modeling object shapes by observing real objects. In contrast, attempts to model reflectance properties of real ..."
Abstract

Cited by 223 (17 self)
An object model for computer graphics applications should contain two aspects of information: shape and reflectance properties of the object. A number of techniques have been developed for modeling object shapes by observing real objects. In contrast, attempts to model reflectance properties of real objects have been rather limited. In most cases, modeled reflectance properties are too simple or too complicated to be used for synthesizing realistic images of the object. In this paper, we propose a new method for modeling object reflectance properties, as well as object shapes, by observing real objects. First, an object surface shape is reconstructed by merging multiple range images of the object. By using the reconstructed object shape and a sequence of color images of the object, parameters of a reflection model are estimated in a robust manner. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then, reflectance parameters of each reflection component are estimated separately. This approach enables estimation of reflectance properties of real objects whose surfaces show specular as well as diffuse reflection. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
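The separation step can be caricatured per pixel: if the diffuse term were constant over the image sequence while the specular lobe appeared only in frames near the mirror configuration, a minimum over frames would recover the diffuse part and the residual the specular part. The paper's actual separation is more sophisticated; this is only a sketch of the idea:

```python
def separate_reflection(sequence):
    """Toy per-pixel diffuse/specular separation for an image sequence:
    take the minimum over frames as the diffuse estimate (assumes a
    constant diffuse term) and the residual as the specular component."""
    diffuse = [min(frame[i] for frame in sequence)
               for i in range(len(sequence[0]))]
    specular = [[frame[i] - diffuse[i] for i in range(len(diffuse))]
                for frame in sequence]
    return diffuse, specular

# One-pixel sequence: constant diffuse 0.3 plus a specular spike in frame 1.
frames = [[0.3], [0.9], [0.3]]
d, s = separate_reflection(frames)
# d == [0.3]; the frame-1 residual isolates the specular spike.
```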