Results 1–10 of 106
UYTTENDAELE M.: Joint bilateral upsampling
ACM Trans. Graph., 2007
"... Image analysis and enhancement tasks such as tone mapping, colorization, stereo depth, and photomontage, often require computing a solution (e.g., for exposure, chromaticity, disparity, labels) over the pixel grid. Computational and memory costs often require that a smaller solution be run over a do ..."
Abstract

Cited by 149 (3 self)
Image analysis and enhancement tasks such as tone mapping, colorization, stereo depth, and photomontage often require computing a solution (e.g., for exposure, chromaticity, disparity, labels) over the pixel grid. Computational and memory costs often require that a smaller solution be computed over a downsampled image. Although general-purpose upsampling methods can be used to interpolate the low-resolution solution to the full resolution, these methods generally assume a smoothness prior for the interpolation. We demonstrate that in cases such as those above, the available high-resolution input image may be leveraged as a prior in the context of a joint bilateral upsampling procedure to produce a better high-resolution solution. We show results for each of the applications above and compare them to traditional upsampling methods.
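As a rough illustration of the idea described in this abstract (the function, parameter names, and Gaussian weights below are my own minimal sketch, not the authors' implementation): the low-resolution solution is interpolated with a spatial Gaussian over the low-resolution grid, while a range Gaussian evaluated on the full-resolution guide image preserves that image's edges.

```python
import numpy as np

def joint_bilateral_upsample(solution_lo, guide_hi, sigma_s=1.0, sigma_r=0.1, radius=2):
    """Upsample a low-res solution using a high-res guide image as a range prior.

    solution_lo : (h, w) low-resolution solution (e.g. an exposure map)
    guide_hi    : (H, W) high-resolution guide image, values in [0, 1]
    """
    H, W = guide_hi.shape
    h, w = solution_lo.shape
    sy, sx = h / H, w / W  # map high-res pixel coordinates to the low-res grid
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y * sy, x * sx  # position of pixel p in the low-res grid
            acc = norm = 0.0
            for qy in range(max(0, int(cy) - radius), min(h, int(cy) + radius + 1)):
                for qx in range(max(0, int(cx) - radius), min(w, int(cx) + radius + 1)):
                    # spatial weight f: distance in low-res coordinates
                    fs = np.exp(-((qy - cy) ** 2 + (qx - cx) ** 2) / (2 * sigma_s ** 2))
                    # range weight g: compare the guide at p with the guide at
                    # the high-res location corresponding to the low-res sample q
                    gy, gx = min(H - 1, int(qy / sy)), min(W - 1, int(qx / sx))
                    fr = np.exp(-(guide_hi[y, x] - guide_hi[gy, gx]) ** 2 / (2 * sigma_r ** 2))
                    acc += solution_lo[qy, qx] * fs * fr
                    norm += fs * fr
            out[y, x] = acc / norm
    return out
```

Near a guide-image edge, the range weight suppresses low-resolution samples from the far side, so the upsampled solution stays sharp where the guide is sharp instead of blurring across the edge.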
Apparent ridges for line drawing
ACM Transactions on Graphics, 2007
"... Nonphotorealistic line drawing depicts 3D shapes through the rendering of feature lines. A number of characterizations of relevant lines have been proposed but none of these definitions alone seem to capture all visuallyrelevant lines. We introduce a new definition of feature lines based on two pe ..."
Abstract

Cited by 63 (1 self)
Non-photorealistic line drawing depicts 3D shapes through the rendering of feature lines. A number of characterizations of relevant lines have been proposed, but none of these definitions alone seems to capture all visually relevant lines. We introduce a new definition of feature lines based on two perceptual observations. First, human perception is sensitive to the variation of shading, and since shape perception is little affected by lighting and reflectance modification, we should focus on normal variation. Second, view-dependent lines convey the shape of smooth surfaces better than view-independent lines. From this we define view-dependent curvature as the variation of the surface normal with respect to a viewing screen plane, and apparent ridges as the loci of points that maximize the view-dependent curvature. We derive the equation for apparent ridges and present a new algorithm to render line drawings of 3D meshes. We show that our apparent ridges encompass or enhance aspects of several other feature lines.
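The definition sketched in the abstract can be written compactly. The operator form below is an assumption based on standard differential-geometry notation (n the surface normal, π the projection onto the screen plane, q the maximum view-dependent curvature, t its principal direction), not taken verbatim from the snippet:

```latex
% View-dependent curvature: variation of the normal with respect to the
% viewing screen plane (assuming d\pi is invertible away from contours)
\[
  Q = \mathrm{d}n \circ (\mathrm{d}\pi)^{-1},
  \qquad q = \max_{\|v\|=1} \lVert Q(v) \rVert ,
\]
% Apparent ridges: loci where q attains a local maximum along its
% principal direction t
\[
  D_{t}\, q = 0, \qquad D_{t} D_{t}\, q < 0 .
\]
```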
Multidimensional Adaptive Sampling and Reconstruction for Ray Tracing
"... We present a new adaptive sampling strategy for ray tracing. Our technique is specifically designed to handle multidimensional sample domains, and it is well suited for efficiently generating images with effects such as soft shadows, motion blur, and depth of field. These effects are problematic for ..."
Abstract

Cited by 44 (1 self)
We present a new adaptive sampling strategy for ray tracing. Our technique is specifically designed to handle multidimensional sample domains, and it is well suited for efficiently generating images with effects such as soft shadows, motion blur, and depth of field. These effects are problematic for existing image-based adaptive sampling techniques, as they operate on pixels, which are possibly noisy results of a Monte Carlo ray tracing process. Our sampling technique operates on samples in the multidimensional space given by the rendering equation, and as a consequence the value of each sample is noise-free. Our algorithm consists of two passes. In the first pass we adaptively generate samples in the multidimensional space, focusing on regions where the local contrast between samples is high. In the second pass we reconstruct the image by integrating the multidimensional function along all but the image dimensions. We perform a high-quality anisotropic reconstruction by determining the extent of each sample in the multidimensional space using a structure tensor. We demonstrate our method on scenes with a 3- to 5-dimensional sample space, including soft shadows, motion blur, and depth of field. The results show that our method uses fewer samples than Mitchell's adaptive sampling technique while producing images with less noise.
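A toy one-dimensional instance of the two-pass idea (the function name and budget parameters are assumptions; the paper operates in the full multidimensional domain with anisotropic, structure-tensor-based reconstruction):

```python
import numpy as np

def adaptive_integrate(f, n_init=4, n_budget=32):
    """First pass: adaptively place samples on [0, 1], refining wherever the
    local contrast between neighboring samples is highest.  Second pass:
    integrate the samples (here with the trapezoid rule)."""
    ts = list(np.linspace(0.0, 1.0, n_init))
    vs = [f(t) for t in ts]
    while len(ts) < n_budget:
        # pick the interval with the largest local contrast and split it
        contrasts = [abs(vs[i + 1] - vs[i]) for i in range(len(ts) - 1)]
        i = int(np.argmax(contrasts))
        if contrasts[i] == 0.0:
            break  # function locally flat everywhere sampled so far
        mid = 0.5 * (ts[i] + ts[i + 1])
        ts.insert(i + 1, mid)
        vs.insert(i + 1, f(mid))
    return sum(0.5 * (vs[i] + vs[i + 1]) * (ts[i + 1] - ts[i])
               for i in range(len(ts) - 1))
```

On a discontinuous integrand the refinement budget concentrates around the discontinuity, which is the same behavior the paper exploits in the higher-dimensional lens/time/light domains.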
The influence of shape on the perception of material reflectance
ACM Transactions on Graphics, 2007
"... Figure 1: The tesselated spheres in the left image are rendered with two different types of a blue plastic BRDF, yet they are perceived as made from the same material. The objects in the right image are rendered with an identical blue plastic BRDF, yet their appearance is very different. Visual obse ..."
Abstract

Cited by 44 (2 self)
Figure 1: The tessellated spheres in the left image are rendered with two different types of a blue plastic BRDF, yet they are perceived as made from the same material. The objects in the right image are rendered with an identical blue plastic BRDF, yet their appearance is very different. Visual observation is our principal source of information in determining the nature of objects, including shape, material, or roughness. The physiological and cognitive processes that resolve visual input into an estimate of the material of an object are influenced by the illumination and the shape of the object. This affects our ability to select materials by observing them on a point-lit sphere, as is common in current 3D modeling applications. In this paper we present an exploratory psychophysical experiment to study various influences on material discrimination in a realistic setting. The resulting data set is analyzed using a wide range of statistical techniques. Analysis of variance is used to estimate the magnitude of the influence of geometry, and fitted psychometric functions produce significantly diverse material discrimination thresholds across different shapes and materials. Suggested improvements to traditional material pickers include direct visualization on the target object, environment illumination, and the use of discrimination thresholds as a step size for parameter adjustments.
Fourier Depth of Field
"... Optical systems used in photography and cinema produce depth of field effects, that is, variations of focus with depth. These effects are simulated in image synthesis by integrating incoming radiance at each pixel over the lense aperture. Unfortunately, aperture integration is extremely costly for d ..."
Abstract

Cited by 32 (13 self)
Optical systems used in photography and cinema produce depth of field effects, that is, variations of focus with depth. These effects are simulated in image synthesis by integrating incoming radiance at each pixel over the lens aperture. Unfortunately, aperture integration is extremely costly for defocused areas where the incoming radiance has high variance, since many samples are then required for a noise-free Monte Carlo integration. On the other hand, using many aperture samples is wasteful in focused areas where the integrand varies little. Similarly, image sampling in defocused areas should be adapted to the very smooth appearance variations due to blurring. This paper introduces an analysis of focusing and depth of field in the frequency domain, allowing a practical characterization of a light field's frequency content both for image and aperture sampling. Based on this analysis we propose an adaptive depth of field rendering algorithm which optimizes sampling in two important ways. First, image sampling is based on conservative bandwidth prediction and a splatting reconstruction technique ensures correct image reconstruction. Second, at each pixel the variance in the radiance over the aperture is estimated, and used to govern sampling. This technique is easily integrated in any sampling-based renderer, and vastly improves performance.
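A minimal sketch of variance-governed aperture sampling (the pilot/threshold scheme and all names below are illustrative assumptions; the paper derives its sample counts from a frequency-domain bandwidth analysis rather than a pilot estimate):

```python
def integrate_aperture(radiance, n_pilot=8, n_max=64, var_threshold=1e-4):
    """Draw a few pilot samples over the lens aperture (u, v), estimate the
    variance of the incoming radiance, and spend extra samples only where
    that variance is high (defocused regions)."""
    import random
    samples = [radiance(random.random(), random.random()) for _ in range(n_pilot)]
    mean = sum(samples) / n_pilot
    var = sum((s - mean) ** 2 for s in samples) / (n_pilot - 1)
    if var > var_threshold:  # defocused: integrand varies over the aperture
        samples += [radiance(random.random(), random.random())
                    for _ in range(n_max - n_pilot)]
    return sum(samples) / len(samples), len(samples)
```

A focused pixel (nearly constant radiance over the aperture) keeps the small pilot budget; a defocused pixel receives the full budget.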
Temporal Light Field Reconstruction for Rendering Distribution Effects
"... A scene with complex occlusion rendered with depth of field. Left: Images rendered by PBRT [Pharr and Humphreys 2010] using 16 and 256 lowdiscrepancy samples per pixel (spp) and traditional axisaligned filtering. Right: Image reconstructed by our algorithm in 10 seconds from the same 16 samples pe ..."
Abstract

Cited by 32 (4 self)
A scene with complex occlusion rendered with depth of field. Left: Images rendered by PBRT [Pharr and Humphreys 2010] using 16 and 256 low-discrepancy samples per pixel (spp) and traditional axis-aligned filtering. Right: Image reconstructed by our algorithm in 10 seconds from the same 16 samples per pixel. We obtain defocus quality similar to the 256 spp result in approximately 1/16th of the time. Traditionally, effects that require evaluating multidimensional integrals for each pixel, such as motion blur, depth of field, and soft shadows, suffer from noise due to the variance of the high-dimensional integrand. In this paper, we describe a general reconstruction technique that exploits the anisotropy in the temporal light field and permits efficient reuse of samples between pixels, multiplying the effective sampling rate by a large factor. We show that our technique can be applied in situations that are challenging or impossible for previous anisotropic reconstruction methods, and that it can yield good results with very sparse inputs. We demonstrate our method for simultaneous motion blur, depth of field, and soft shadows.
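The sample-reuse idea can be sketched on a 1-D "screen" (everything below is a simplified assumption: the paper handles visibility, apertures, and anisotropic filters, none of which appear here):

```python
def reconstruct(samples, n_pixels, n_times=8):
    """Each sample carries a screen position, a time, a screen-space velocity,
    and a radiance value.  Reprojecting every sample along its (assumed
    linear) trajectory to many shutter times lets neighboring pixels share
    it, multiplying the effective sampling rate."""
    accum = [0.0] * n_pixels
    count = [0] * n_pixels
    for x, t, vel, val in samples:
        for k in range(n_times):
            t2 = (k + 0.5) / n_times      # a shutter time to evaluate
            x2 = x + vel * (t2 - t)       # reprojected screen position
            px = int(x2)
            if 0 <= px < n_pixels:
                accum[px] += val          # splat the reused sample
                count[px] += 1
    return [a / c if c else 0.0 for a, c in zip(accum, count)]
```

A single moving sample thus contributes to every pixel its trajectory crosses during the shutter interval, which is where the large effective-sampling-rate gain comes from.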
Reflectance Sharing: Predicting Appearance from a Sparse Set of Images of a Known Shape
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006
"... ..."
(Show Context)
Shield fields: modeling and capturing 3D occluders
ACM Trans. Graph.
"... We describe a unified representation of occluders in light transport and photography using shield fields: the 4D attenuation function which acts on any light field incident on an occluder. Our key theoretical result is that shield fields can be used to decouple the effects of occluders and incident ..."
Abstract

Cited by 29 (9 self)
We describe a unified representation of occluders in light transport and photography using shield fields: the 4D attenuation function which acts on any light field incident on an occluder. Our key theoretical result is that shield fields can be used to decouple the effects of occluders and incident illumination. We first describe the properties of shield fields in the frequency domain and briefly analyze the "forward" problem of efficiently computing cast shadows. Afterwards, we apply the shield field signal-processing framework to make several new observations regarding the "inverse" problem of reconstructing 3D occluders from cast shadows, extending previous work on shape-from-silhouette and visual hull methods. From this analysis we develop the first single-camera, single-shot approach to capture visual hulls without requiring moving or programmable illumination. We analyze several competing camera designs, ultimately leading to the development of a new large-format, mask-based light field camera that exploits optimal tiled-broadband codes for light-efficient shield field capture. We conclude by presenting a detailed experimental analysis of shield field capture and 3D occluder reconstruction.
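The decoupling claim has a direct ray-by-ray reading, sketched below on a discretized 4-D light field (function names are mine; the paper's analysis is carried out in the frequency domain):

```python
import numpy as np

def apply_shield_field(light_in, shield):
    """The shield field s acts multiplicatively, ray by ray, on any incident
    light field: L_out(x, theta) = s(x, theta) * L_in(x, theta)."""
    return shield * light_in

def recover_shield_field(light_out, light_in, eps=1e-8):
    """Decoupling: with known incident illumination, the occluder's 4-D
    attenuation function is recovered by per-ray division (eps guards
    against division by dark rays)."""
    return light_out / np.maximum(light_in, eps)
```

Because the occluder's effect is a per-ray product, measuring the output under known illumination isolates the attenuation function, independent of the lighting used.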
A Precomputed Polynomial Representation for Interactive BRDF Editing with Global Illumination
Conditionally accepted to ACM Transactions on Graphics (2007–2008)
"... The ability to interactively edit BRDFs in their final placement within a computer graphics scene is vital to making informed choices for material properties. We significantly extend previous work on BRDF editing for static scenes (with fixed lighting and view), by developing a precomputed polynomia ..."
Abstract

Cited by 25 (2 self)
The ability to interactively edit BRDFs in their final placement within a computer graphics scene is vital to making informed choices for material properties. We significantly extend previous work on BRDF editing for static scenes (with fixed lighting and view) by developing a precomputed polynomial representation that enables interactive BRDF editing with global illumination. Unlike previous precomputation-based rendering techniques, the image is not linear in the BRDF when considering interreflections. We introduce a framework for precomputing a multi-bounce tensor of polynomial coefficients that encapsulates the nonlinear nature of the task. Significant reductions in complexity are achieved by leveraging the low-frequency nature of indirect light. We use a high-quality representation for the BRDFs at the first bounce from the eye, and lower-frequency (often diffuse) versions for further bounces. This approximation correctly captures the general global illumination in a scene, including color-bleeding, near-field object reflections, and even caustics. We adapt Monte Carlo path tracing for precomputing the tensor of coefficients for BRDF basis functions. At runtime, the high-dimensional tensors can be reduced to a simple dot product at each pixel for rendering. We present a number of examples of editing BRDFs in complex scenes, with interactive feedback rendered with global illumination.
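A toy instance of the runtime reduction for a two-bounce approximation (the quadratic form, shapes, and names below are my own illustration; the paper's tensors, basis sizes, and bounce counts differ): because a two-bounce image is quadratic in the BRDF basis weights, each pixel can store a small precomputed coefficient matrix and rendering collapses to a per-pixel polynomial evaluation.

```python
import numpy as np

def shade(coeff_tensor, w):
    """Evaluate the precomputed polynomial per pixel.

    coeff_tensor : (H, W, K, K) precomputed coefficients T
    w            : (K,) current BRDF basis weights
    Returns the (H, W) image sum_{j,k} T[j, k] * w[j] * w[k] at each pixel.
    """
    return np.einsum('hwjk,j,k->hw', coeff_tensor, w, w)
```

Editing the BRDF only changes `w`, so interactive feedback requires re-evaluating this small polynomial per pixel rather than re-running the Monte Carlo precomputation.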