Results 1–10 of 36
TensorTextures: Multilinear Image-Based Rendering
 ACM TRANSACTIONS ON GRAPHICS
, 2004
Abstract

Cited by 62 (0 self)
This paper introduces a tensor framework for image-based rendering. In particular, we develop an algorithm called TensorTextures that learns a parsimonious model of the bidirectional texture function (BTF) from observational data. Given an ensemble of images of a textured surface, our nonlinear, generative model explicitly represents the multifactor interaction implicit in the detailed appearance of the surface under varying photometric angles, including local (per-texel) reflectance, complex mesostructural self-occlusion, interreflection and self-shadowing, and other BTF-relevant phenomena. Mathematically, TensorTextures is based on multilinear algebra, the algebra of higher-order tensors, hence its name. It is computed through a decomposition known as the N-mode SVD, an extension to tensors of the conventional matrix singular value decomposition (SVD). We demonstrate the application of TensorTextures to the image-based rendering of natural and synthetic textured surfaces under continuously varying viewpoint and illumination conditions.
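The N-mode SVD the abstract refers to can be sketched compactly: unfold the tensor along each mode, take the left singular vectors of each unfolding as that mode's factor matrix, and contract the tensor against the transposed factors to obtain the core. The sketch below is a generic Tucker/N-mode SVD, not the authors' code; it assumes NumPy, and `unfold` and `mode_product` are illustrative helper names.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: the mode-n fibers become the columns of a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_product(tensor, matrix, mode):
    """Mode-n product: contract `matrix` against mode `mode` of `tensor`."""
    return np.moveaxis(np.tensordot(matrix, tensor, axes=([1], [mode])), 0, mode)

def n_mode_svd(tensor):
    """N-mode SVD: one orthonormal factor matrix per mode plus a core tensor,
    so that tensor == core x_1 U1 x_2 U2 ... x_N UN."""
    factors = [np.linalg.svd(unfold(tensor, n), full_matrices=False)[0]
               for n in range(tensor.ndim)]
    core = tensor
    for n, U in enumerate(factors):
        core = mode_product(core, U.T, n)
    return core, factors
```

Truncating the columns of each factor matrix (and the core accordingly) yields the kind of parsimonious multilinear model the paper exploits.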
Rendering Forest Scenes in Real-Time
, 2004
Abstract

Cited by 50 (5 self)
Forests are crucial for scene realism in applications such as flight simulators. This paper proposes a new representation allowing for the real-time rendering of realistic forests covering an arbitrary terrain. It lets us produce dense forests corresponding to continuous non-repetitive fields made of thousands of trees with full parallax. Our representation draws on volumetric textures and aperiodic tiling: the forest consists of a set of edge-compatible prisms containing forest samples which are aperiodically mapped onto the ground. The representation allows for quality rendering, thanks to appropriate 3D nonlinear filtering. It relies on LODs and on a GPU-friendly structure to achieve real-time performance. Dynamic lighting and shadowing are beyond the scope of this paper. On the other hand, we require no advanced graphics feature except 3D textures and decent fill and vertex transform rates. However, we can take advantage of vertex shaders so that the slicing of the volumetric texture is entirely done on the GPU. Keywords: real-time rendering, natural scenes, 3D textures, aperiodic tiling, volumetric rendering, slicing, texcells.
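The edge-compatible aperiodic mapping can be illustrated with a small Wang-tile-style sketch. Everything here is invented for the example: the tile set, the edge colors, and the `tile_ground` helper; the paper's tiles are volumetric prisms rather than the flat cells used below. Each cell picks a tile whose west and north edge colors match its already-placed neighbours.

```python
import random

# Hypothetical tile set: each tile is (north, east, south, west) edge colors.
# It covers every (north, west) color pair, so a compatible tile always exists.
TILES = [(0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 1, 1),
         (0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]

def tile_ground(w, h, rng):
    """Fill a w x h grid with tiles whose shared edges have matching colors."""
    grid = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            candidates = [t for t in TILES
                          if (x == 0 or t[3] == grid[y][x - 1][1])   # west edge
                          and (y == 0 or t[0] == grid[y - 1][x][2])]  # north edge
            grid[y][x] = rng.choice(candidates)
    return grid
```

The random choice among compatible tiles is what keeps the field non-repetitive while every seam between neighbouring samples stays continuous.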
Bidirectional Texture Function Modeling: A State of the Art Survey
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2009
Abstract

Cited by 30 (5 self)
Abstract—An ever-growing number of real-world computer vision applications require classification, segmentation, retrieval, or realistic rendering of genuine materials. However, the appearance of real materials changes dramatically with illumination and viewing variations. Thus, the only reliable representation of a material's visual properties requires capturing its reflectance over as wide a range of light and camera position combinations as possible. This is the principle of the most advanced recent texture representation, the Bidirectional Texture Function (BTF). A multispectral BTF is a seven-dimensional function that depends on view and illumination directions as well as on planar texture coordinates. A BTF is typically obtained by measuring thousands of images covering many combinations of illumination and viewing angles. However, the large size of such measurements has prohibited their practical exploitation in any sensible application until recently. During the last few years, the first BTF measurement, compression, modeling, and rendering methods have emerged. In this paper, we categorize, critically survey, and psychophysically compare such approaches, which were published in this newly arising and important computer vision and graphics area. Index Terms—BTF, BRDF, 3D texture, surface texture, texture measurement, texture analysis, texture synthesis, texture modeling, data compression, psychophysical study, light transport.
Displacement Mapping on the GPU — State of the Art
Abstract

Cited by 22 (1 self)
This paper reviews the latest developments in displacement mapping algorithms implemented on the vertex, geometry, and fragment shaders of graphics cards. Displacement mapping algorithms are classified as per-vertex and per-pixel methods. Per-pixel approaches are further categorized as safe algorithms that aim at correct solutions in all cases, unsafe techniques that may fail in extreme cases but are usually much faster than safe algorithms, and combined methods that exploit the robustness of safe and the speed of unsafe techniques. We discuss the possible roles of vertex, geometry, and fragment shaders in implementing these algorithms. Then the particular GPU-based bump, parallax, relief, sphere, horizon mapping, cone stepping, local ray tracing, pyramidal, and view-dependent displacement mapping methods, as well as their numerous variations, are reviewed, also providing implementation details of the shader programs. We present these methods using uniform notations and point out when different authors called similar concepts by different names. In addition to basic displacement mapping, self-shadowing and silhouette processing are also reviewed. Based on our experience gained from reimplementing these methods, their performance and quality are compared, and their advantages and disadvantages are fairly presented.
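Many of the per-pixel methods this survey covers share one skeleton: a linear search along the view ray through a height field, optionally refined by a binary search once the surface is bracketed. Below is a minimal 1D sketch of that skeleton in generic relief-mapping style, not any single paper's algorithm; `height` is a toy height field invented for the example. A too-coarse linear step skipping a thin feature is exactly the "unsafe" failure mode the survey discusses.

```python
def height(u):
    """Toy 1D height field on [0, 1]: a flat plateau of height 0.5."""
    return 0.5 if 0.4 <= u <= 0.6 else 0.0

def relief_march(u0, du, steps=32, refine=8):
    """March a ray entering at height 1.0 and descending linearly to 0.0;
    return the texture coordinate of the first hit, or None."""
    prev_u, prev_h = u0, 1.0
    for i in range(1, steps + 1):
        t = i / steps
        u, h = u0 + du * t, 1.0 - t
        if h <= height(u):                       # below the surface: bracketed
            lo_u, lo_h, hi_u, hi_h = prev_u, prev_h, u, h
            for _ in range(refine):              # binary search in the bracket
                mid_u = 0.5 * (lo_u + hi_u)
                mid_h = 0.5 * (lo_h + hi_h)      # ray is linear, so midpoint is exact
                if mid_h <= height(mid_u):
                    hi_u, hi_h = mid_u, mid_h
                else:
                    lo_u, lo_h = mid_u, mid_h
            return hi_u
        prev_u, prev_h = u, h
    return None
```

The reviewed methods differ mainly in how they replace the blind linear search with precomputed bounds (cones, spheres, pyramids) that make the stepping safe or faster.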
A survey of nonlinear prefiltering methods for efficient and accurate surface shading
 IEEE Trans. Vis. Comput. Graph
, 2012
Abstract

Cited by 17 (2 self)
Abstract—Rendering a complex surface accurately and without aliasing requires the evaluation of an integral for each pixel, namely a weighted average of the outgoing radiance over the pixel footprint on the surface. The outgoing radiance is itself given by a local illumination equation as a function of the incident radiance and of the surface properties. Computing all this numerically during rendering can be extremely costly. For efficiency, especially for real-time rendering, it is necessary to use precomputations. When the fine-scale surface geometry, reflectance, and illumination properties are specified with maps on a coarse mesh (such as color maps, normal maps, horizon maps, or shadow maps), a frequently used simple idea is to prefilter each map linearly and separately. The averaged outgoing radiance, i.e., the average of the values given by the local illumination equation, is then estimated by applying this equation to the averaged surface parameters. But this is not accurate, because this equation is nonlinear, due to self-occlusions, self-shadowing, nonlinear reflectance functions, etc. Some methods use more complex prefiltering algorithms to cope with these nonlinear effects. This paper is a survey of these methods. We start with a general presentation of the problem of prefiltering complex surfaces. We then present and classify the existing methods according to the approximations they make to tackle this difficult problem. Finally, an analysis of these methods allows us to highlight some generic tools to prefilter maps used in nonlinear functions, and to identify open issues in addressing the general problem. Index Terms—Rendering, antialiasing, prefiltering, reflectance, surface
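The survey's central observation, that shading the averaged parameters differs from averaging the shaded results, is easy to demonstrate. In this toy example (a clamped Lambertian term over a symmetric V-groove; the geometry is invented for illustration), naively prefiltering the normal map doubles the correct filtered radiance:

```python
import math

def shade(normal, light):
    """Clamped Lambertian term max(0, n.l) -- a nonlinear shading function."""
    return max(0.0, sum(n * l for n, l in zip(normal, light)))

def normalize(v):
    s = math.sqrt(sum(c * c for c in v))
    return tuple(c / s for c in v)

light = (0.0, 0.0, 1.0)
# Two facets of a V-groove, tilted +/-60 degrees about the x-axis.
n1 = normalize((0.0,  math.sin(math.pi / 3), math.cos(math.pi / 3)))
n2 = normalize((0.0, -math.sin(math.pi / 3), math.cos(math.pi / 3)))

# Correct answer: average the shaded values (cos 60 = 0.5 on each facet).
avg_of_shaded = 0.5 * (shade(n1, light) + shade(n2, light))
# Naive linear prefilter: shade the (renormalized) averaged normal, which
# points straight up, so the groove shades as if it were a flat mirror-lit plane.
shaded_avg = shade(normalize((0.0, 0.0, 1.0)), light)
```

Here `avg_of_shaded` is 0.5 while `shaded_avg` is 1.0: the normalization and clamping in the shading equation are exactly the nonlinearities the surveyed methods try to preserve under filtering.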
All-frequency relighting of glossy objects
 ACM TRANSACTIONS ON GRAPHICS
, 2006
Abstract

Cited by 16 (2 self)
We present a technique for interactive rendering of glossy objects in complex and dynamic lighting environments that captures interreflections and all-frequency shadows. Our system is based on precomputed radiance transfer and separable BRDF approximation. We factor glossy BRDFs using a separable decomposition and keep only a few low-order approximation terms, each consisting of a purely view-dependent and a purely light-dependent component. In the precomputation step, for every vertex we sample its visibility and compute a direct illumination transport vector corresponding to each BRDF term. We use modern graphics hardware to accelerate this step, and further compress the data using a nonlinear wavelet approximation. The direct illumination pass is followed by one or more interreflection passes, each of which gathers compressed transport vectors from the previous pass to produce global illumination transport vectors. To render at run time, we dynamically sample the lighting to produce a light vector, also represented in a wavelet basis. We compute the inner product of the light vector with the precomputed transport vectors, and the results are further combined with the BRDF view-dependent components to produce vertex colors. We describe acceleration of the rendering algorithm using programmable graphics hardware, and discuss the limitations and tradeoffs imposed by the hardware.
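At run time this style of precomputed radiance transfer reduces, per vertex and per BRDF term, to a sparse inner product between the wavelet light vector and the precomputed transport vector. A schematic sketch, with coefficient values and sizes made up for illustration (the real system works over wavelet bases of much higher dimension):

```python
def relight(transport, sparse_light):
    """Exit radiance for one vertex and one BRDF term: inner product of the
    dense precomputed transport vector with a sparse light vector, where
    sparse_light maps basis index -> coefficient after nonlinear approximation."""
    return sum(c * transport[i] for i, c in sparse_light.items())

# Hypothetical data: one vertex's transport vector in a 6-term wavelet basis,
# and a light vector where only the two largest wavelet coefficients were kept.
transport = [0.8, 0.1, -0.05, 0.02, 0.0, 0.01]
light = {0: 1.0, 2: 0.5}
color = relight(transport, light)   # 0.8 * 1.0 + (-0.05) * 0.5 = 0.775
```

Because only the retained light coefficients contribute, the per-vertex cost scales with the size of the nonlinear approximation rather than with the full basis dimension.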
Interactive editing and modeling of bidirectional texture functions
 IN PROCEEDINGS OF SIGGRAPH
, 2007
Abstract

Cited by 12 (0 self)
While measured Bidirectional Texture Functions (BTF) enable impressive realism in material appearance, they offer little control, which limits their use for content creation. In this work, we interactively manipulate BTFs and create new BTFs from flat textures. We present an out-of-core approach to manage the size of BTFs and introduce new editing operations that modify the appearance of a material. These tools achieve their full potential when selectively applied to subsets of the BTF through the use of new selection operators. We further analyze the use of our editing operators for the modification of important visual characteristics such as highlights, roughness, and fuzziness. Results compare favorably to the direct alteration of microgeometry and reflectances of synthetic reference data.
A Survey on Computer Representations of Trees for Realistic and Efficient Rendering
, 2006
Abstract

Cited by 10 (2 self)
This paper gives an overview of the computer graphics representations of trees commonly used for rendering complex scenes of vegetation. The search for the right compromise between realism and efficiency has led researchers to consider various types of geometrical plant models of differing complexity. To achieve realistic plant models, a complex plant structure with full detail is generally considered. In contrast, to promote efficiency, other approaches summarize plant geometry with a few primitives allowing rapid rendering. Finally, to find a good compromise, structures with adaptive complexity are defined. These different types of representations and the ways to use them are presented, classified, and discussed. The proposed classification principles rely on the type of structural detail used in the plant representations. The characterization of all these methods is completed with various additional criteria, including rendering primitive type, distance validity, interaction possibilities, animation ability, and lighting properties.
Wavelet Encoding of BRDFs for Real-Time Rendering
 In Graphics Interface 07
, 2007
Abstract

Cited by 6 (0 self)
Figure 1: An illuminated fabric with an acquired anisotropic velvet (left), isotropic wood (middle), and shiny plastic (right) BRDF rendered at 40 FPS (512 × 512) using our wavelet encoding. The initial data sets containing 32⁴ RGB samples (12 MB) are compressed into 3D textures of 700 KB. Our approach can be combined with classical texture, environment, and bump mapping in order to produce high-quality local illumination. Acquired data often provides the best knowledge of a material’s bidirectional reflectance distribution function (BRDF). Its integration into most real-time rendering systems requires both data compression and the implementation of the decompression and filtering stages on contemporary graphics processing units (GPUs). This paper improves the quality of real-time per-pixel lighting on GPUs using a wavelet decomposition of acquired BRDFs. Three-dimensional texture mapping with indexing allows us to efficiently compress the BRDF data by exploiting much of the coherency between hemispherical data. We apply built-in hardware filtering and pixel shader flexibility to perform filtering in the full 4D BRDF domain. Antialiasing of specular highlights is performed via a progressive level-of-detail technique built upon the multiresolution of the wavelet encoding. This technique increases rendering performance on distant surfaces while maintaining accurate appearance of close ones. CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture
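The nonlinear wavelet approximation underlying this kind of compression and level-of-detail scheme can be sketched in one dimension: transform, keep only the largest-magnitude coefficients, invert. The following is a generic orthonormal Haar codec written for illustration, not the paper's actual encoder:

```python
import math

def haar_forward(x):
    """Full orthonormal Haar transform of a length-2^k signal."""
    x = list(x)
    n = len(x)
    while n > 1:
        avg = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(n // 2)]
        dif = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(n // 2)]
        x[:n] = avg + dif        # averages first, then details of this level
        n //= 2
    return x

def haar_inverse(c):
    """Inverse of haar_forward."""
    c = list(c)
    n = 1
    while n < len(c):
        out = []
        for i in range(n):       # recombine averages c[:n] with details c[n:2n]
            out.append((c[i] + c[n + i]) / math.sqrt(2))
            out.append((c[i] - c[n + i]) / math.sqrt(2))
        c[:2 * n] = out
        n *= 2
    return c

def compress(x, keep):
    """Nonlinear approximation: zero all but the `keep` largest coefficients."""
    c = haar_forward(x)
    ranked = sorted(range(len(c)), key=lambda i: abs(c[i]), reverse=True)
    kept = set(ranked[:keep])
    return haar_inverse([v if i in kept else 0.0 for i, v in enumerate(c)])
```

Because the basis is orthonormal, keeping more coefficients never increases the L2 reconstruction error, which is what makes a progressive level-of-detail traversal of the coefficients well behaved.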
A survey of image-based relighting techniques
 In GRAPP ’06
, 2006
Abstract

Cited by 5 (1 self)
Abstract: Image-based Relighting (IBRL) has recently attracted a lot of research interest for its ability to relight real objects or scenes from novel illuminations captured in natural or synthetic environments. Complex lighting effects such as subsurface scattering, interreflection, shadowing, mesostructural self-occlusion, refraction, and other relevant phenomena can be generated using IBRL. The main advantage of image-based graphics is that the rendering time is independent of scene complexity, as rendering is actually a process of manipulating image pixels instead of simulating light transport. The goal of this paper is to provide a complete and systematic overview of the research in image-based relighting. We observe that essentially all IBRL techniques can be broadly classified into three categories, based on how the scene/illumination information is captured: reflectance function based, basis function based, and plenoptic function based. We discuss the characteristics of each of these categories and their representative methods. We also discuss sampling density and types of light source, two relevant issues in IBRL.