Perceptual illumination components: a new approach to efficient, high quality global illumination rendering

by W. A. Stokes, J. A. Ferwerda, B. Walter, D. Greenberg
Venue: ACM Trans. Graph.
Results 1 - 10 of 41

Imperfect shadow maps for efficient computation of indirect illumination

by T. Ritschel, T. Grosch, M. H. Kim, H.-P. Seidel, C. Dachsbacher, J. Kautz - ACM Trans. Graph. (Proc. SIGGRAPH Asia)
Abstract - Cited by 65 (15 self)
Figure caption (truncated): "... GTX. The scene is illuminated with a small spot light (upper right); all other illumination and shadowing is indirect (one bounce)."

We present a method for interactive computation of indirect illumination in large and fully dynamic scenes based on approximate visibility queries. While the high-frequency nature of direct lighting requires accurate visibility, indirect illumination mostly consists of smooth gradations, which tend to mask errors due to incorrect visibility. We exploit this by approximating visibility for indirect illumination with imperfect shadow maps: low-resolution shadow maps rendered from a crude point-based representation of the scene. These are used in conjunction with a global illumination algorithm based on virtual point lights, enabling indirect illumination of dynamic scenes at real-time frame rates. We demonstrate that imperfect shadow maps are a valid approximation to visibility, which makes the simulation of global illumination an order of magnitude faster than using accurate visibility.
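The visibility side of this method can be sketched in a few lines: splat a sparse point sampling of the scene into a tiny depth buffer per virtual point light, then answer visibility queries against it. The projection, resolution, and depth bias below are illustrative assumptions, not the paper's implementation (which splats paraboloid maps for many lights at once on the GPU):

```python
import numpy as np

def splat_imperfect_shadow_map(points, light_pos, res=32):
    """Splat a crude point-based scene representation into a low-resolution
    depth buffer as seen from one virtual point light (VPL). Uses a simple
    orthographic projection along +z for clarity."""
    depth = np.full((res, res), np.inf)
    rel = points - light_pos
    u = np.clip(((rel[:, 0] + 1.0) * 0.5 * res).astype(int), 0, res - 1)
    v = np.clip(((rel[:, 1] + 1.0) * 0.5 * res).astype(int), 0, res - 1)
    for ui, vi, d in zip(u, v, rel[:, 2]):
        if 0.0 < d < depth[vi, ui]:  # keep the nearest occluder per texel
            depth[vi, ui] = d
    return depth

def visible(depth, light_pos, p, bias=0.05):
    """Approximate visibility query against the imperfect shadow map."""
    res = depth.shape[0]
    rel = p - light_pos
    u = int(np.clip((rel[0] + 1.0) * 0.5 * res, 0, res - 1))
    v = int(np.clip((rel[1] + 1.0) * 0.5 * res, 0, res - 1))
    return rel[2] <= depth[v, u] + bias
```

Because indirect light is smooth, errors from the sparse splatting and low resolution tend to be masked, which is exactly the observation the paper exploits.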

Fourier Depth of Field

by Cyril Soler, Frédo Durand, Nicolas Holzschuch, François Sillion
Abstract - Cited by 32 (13 self)
Optical systems used in photography and cinema produce depth of field effects, that is, variations of focus with depth. These effects are simulated in image synthesis by integrating incoming radiance at each pixel over the lens aperture. Unfortunately, aperture integration is extremely costly for defocused areas where the incoming radiance has high variance, since many samples are then required for a noise-free Monte Carlo integration. On the other hand, using many aperture samples is wasteful in focused areas where the integrand varies little. Similarly, image sampling in defocused areas should be adapted to the very smooth appearance variations due to blurring. This paper introduces an analysis of focusing and depth of field in the frequency domain, allowing a practical characterization of a light field's frequency content both for image and aperture sampling. Based on this analysis we propose an adaptive depth of field rendering algorithm which optimizes sampling in two important ways. First, image sampling is based on conservative bandwidth prediction, and a splatting reconstruction technique ensures correct image reconstruction. Second, at each pixel the variance in the radiance over the aperture is estimated and used to govern sampling. This technique is easily integrated in any sampling-based renderer, and vastly improves performance.
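The aperture-integration problem the paper attacks can be sketched with a plain variance-driven stopping rule: keep adding lens samples only while the estimated error of the mean is large. The paper instead predicts the needed sampling rate analytically from the light field's frequency content; the `radiance` callback, tolerances, and sample caps here are hypothetical:

```python
import random
import statistics

def adaptive_aperture_estimate(radiance, n_init=8, n_max=256, rel_tol=0.05):
    """Monte Carlo estimate of a pixel's radiance integrated over the lens
    aperture. radiance(u, v) traces one ray through lens point (u, v).
    Focused pixels (low variance) stop early; defocused ones take more
    samples, up to n_max."""
    samples = [radiance(random.uniform(-1, 1), random.uniform(-1, 1))
               for _ in range(n_init)]
    while len(samples) < n_max:
        mean = statistics.fmean(samples)
        var = statistics.variance(samples)
        # stop when the standard error of the mean is small relative to it
        if mean != 0 and (var / len(samples)) ** 0.5 / abs(mean) < rel_tol:
            break
        samples.append(radiance(random.uniform(-1, 1), random.uniform(-1, 1)))
    return statistics.fmean(samples), len(samples)
```

An in-focus pixel with a nearly constant integrand stops at `n_init` samples, which is the waste the paper's bandwidth prediction also avoids.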

Visual Attention for Efficient High-Fidelity Graphics

by V. Sundstedt, K. Debattista, P. Longhurst, A. Chalmers, T. Troscianko
Abstract - Cited by 28 (10 self)
High-fidelity rendering of complex scenes at interactive rates is one of the primary goals of computer graphics. Since high-fidelity rendering is computationally expensive, perceptual strategies such as visual attention have been explored to achieve this goal. In this paper we investigate how two models of human visual attention can be exploited in a selective rendering system. We examine their effects both individually, and in combination, through psychophysical experiments to measure savings in computation time while preserving the perceived visual quality for a task-related scene. We adapt the lighting simulation system Radiance to support selective rendering, by introducing a selective guidance system which can exploit attentional processes using an importance map. Our experiments demonstrate that viewers performing a visual task within the environment consistently fail to notice the difference between high quality and selectively rendered images, computed in a significantly reduced time.
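An importance map drives selective rendering by spending more computation where the attention model predicts the viewer will look. A minimal, purely illustrative mapping from a per-pixel importance value to a ray budget (the function name and constants are assumptions, not from the paper):

```python
def sample_budget(importance, base=1, peak=64):
    """Map a per-pixel importance (e.g. saliency or task relevance) in
    [0, 1] to a number of rays for that pixel: unattended regions get the
    base budget, fully attended regions get the peak budget."""
    return base + round(importance * (peak - base))
```

A selective renderer such as the modified Radiance system described above would consult a map like this per pixel before tracing.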

Effects of global illumination approximations on material appearance

by James A. Ferwerda, Kavita Bala - ACM Trans. Graph., 2010
Abstract - Cited by 18 (4 self)
Figure 1: Examples of inequivalent and equivalent VPL rendering. (a)-(b) are VPL renderings with 1k VPLs, and clamp levels C1 = 316 and C8 = 0.1, respectively, that are not equivalent (≢) to the reference (d) because they have image artifacts (a) or different perceived material appearance (b). (c) VPL rendering produces an image that is visually equivalent (≡) to the reference for 100k VPLs and clamp level C4 = 10, even though some reflections are lost where the Dragon is in contact with the pedestal and around its silhouette.

Rendering applications in design, manufacturing, e-commerce and other fields are used to simulate the appearance of objects and scenes. Fidelity with respect to appearance is often critical, and calculating global illumination (GI) is an important contributor to image fidelity, but it is expensive to compute. GI approximation methods, such as virtual point light (VPL) algorithms, are efficient, but they can induce image artifacts and distortions of object appearance. In this paper we systematically study the perceptual effects on image quality and material appearance of global illumination approximations made by VPL algorithms. In a series of psychophysical experiments we investigate the relationships between rendering parameters, object properties and image fidelity in a VPL renderer. Using the results of these experiments we analyze how VPL counts and energy clamping levels affect the visibility of image artifacts and distortions of material appearance, and show how object geometry and material properties modulate these effects. We find the ranges of these parameters that produce VPL renderings that are visually equivalent to reference renderings. Further, we identify classes of shapes and materials that cannot be accurately rendered using VPL methods with limited resources. Using these findings we propose simple heuristics to guide visually equivalent and efficient rendering, and present a method for correcting energy losses in VPL renderings. This work provides a strong perceptual foundation for a popular and efficient class of GI algorithms.
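The energy clamping studied here is conventionally applied to the VPL geometry term, which otherwise produces bright singularities when a VPL lies close to the receiver. A minimal diffuse gathering sketch, where the `clamp` constant plays the role of the paper's C levels (layout and names are assumptions of this sketch):

```python
import numpy as np

def vpl_shade(p, n, vpls, clamp=10.0):
    """Indirect light at surface point p with normal n, gathered from a
    list of VPLs given as (position, normal, flux) triples. The geometry
    term is clamped; raising the clamp reduces appearance distortion but
    reveals artifacts, as the study quantifies."""
    total = 0.0
    for vp, vn, flux in vpls:
        d = vp - p
        r2 = float(d @ d)
        w = d / np.sqrt(r2)
        cos_p = max(0.0, float(n @ w))    # cosine at the receiver
        cos_v = max(0.0, float(vn @ -w))  # cosine at the VPL
        g = min(cos_p * cos_v / r2, clamp)
        total += flux * g                 # diffuse albedo folded into flux
    return total
```

The clamp removes the 1/r² spike for nearby VPLs at the cost of losing energy, which is the loss the paper's correction method restores.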

A Lighting Model for General Participating Media

by Kyle Hegeman, Michael Ashikhmin, Simon Premoze - In ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D), 2005
Abstract - Cited by 13 (0 self)
Efficient and visually compelling reproduction of effects due to multiple scattering in participating media remains one of the most difficult tasks in computer graphics. Although several fast techniques have recently been developed, most of them work only for special types of media (for example, uniform or sufficiently dense) or require extensive precomputation. In this paper we present a lighting model for the general case of an inhomogeneous medium and demonstrate its implementation on programmable graphics hardware. It is capable of producing high quality imagery at interactive frame rates with only mild assumptions about medium scattering properties and a moderate amount of simple precomputation.

Perceptual rendering of participating media

by Veronica Sundstedt, Diego Gutierrez, Oscar Anson, Francesco Banterle, Alan Chalmers - ACM Transactions on Applied Perception
Abstract - Cited by 13 (6 self)
High-fidelity image synthesis is the process of computing images that are perceptually indistinguishable from the real world they are attempting to portray. Such a level of fidelity requires that the physical processes of materials and the behavior of light are accurately simulated. Most computer graphics algorithms assume that light passes freely between surfaces within an environment. However, in many applications, we also need to take into account how the light interacts with media, such as dust, smoke, fog, etc., between the surfaces. The computational requirements for calculating the interaction of light with such participating media are substantial. This process can take many hours, and rendering effort is often spent on computing parts of the scene that may not be perceived by the viewer. In this paper, we present a novel perceptual strategy for physically based rendering of participating media. By using a combination of a saliency map with our new extinction map (X map), we can significantly reduce rendering times for inhomogeneous media. The visual quality of the resulting images is validated using two objective difference metrics and a subjective psychophysical experiment. Although the average pixel errors of these metrics are all less than 1%, the subjective validation indicates that the degradation in quality is still noticeable for certain scenes. We thus introduce and validate a novel light map (L map) that accounts for salient features caused by multiple light scattering around …
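The extinction map caches how strongly the medium attenuates light along each view ray; the underlying quantity is Beer-Lambert transmittance, which for an inhomogeneous medium can be estimated by ray marching. A sketch under that reading, not the paper's implementation:

```python
import math

def transmittance(sigma_t, t_max, steps=100):
    """Estimate Beer-Lambert transmittance T = exp(-integral of sigma_t ds)
    along a ray segment [0, t_max] through an inhomogeneous medium using
    midpoint ray marching. sigma_t(t) is the extinction coefficient at
    parametric distance t; higher optical depth means less visible detail
    behind the medium, which is what an extinction map exploits."""
    dt = t_max / steps
    optical_depth = sum(sigma_t((i + 0.5) * dt) * dt for i in range(steps))
    return math.exp(-optical_depth)
```

Pixels whose transmittance is low can be rendered at reduced quality with little perceptible loss, which is the perceptual saving the X map formalizes.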

Perceptual influence of approximate visibility in indirect illumination

by Insu Yu, Andrew Cox, Min H. Kim, Tobias Ritschel, Thorsten Grosch, Carsten Dachsbacher, Jan Kautz - ACM Trans. Appl. Percept., 2009
Abstract - Cited by 13 (5 self)
Figure 1: Renderings of the arches scene, where the indirect illumination in each image is computed with a different visibility approximation. Our psychophysical study shows that many of these visibility approximations produce images that are perceptually very similar to reference renderings (cf. Fig. 3).

In this paper we evaluate the use of approximate visibility for efficient global illumination. Traditionally, accurate visibility is used in light transport. However, the indirect illumination we perceive on a daily basis is rarely of a high-frequency nature, as the most significant aspect of light transport in real-world scenes is diffuse, and thus displays a smooth gradation. This raises the question of whether accurate visibility is perceptually necessary in this case. To answer it, we conduct a psychophysical study on the perceptual influence of approximate visibility on indirect illumination. This study reveals that accurate visibility is not required and that certain approximations may be introduced.

Quality Assessment of Fractalized NPR Textures: a Perceptual Objective Metric

by Pierre Bénard, Joëlle Thollot, François Sillion
Abstract - Cited by 12 (7 self)
Texture fractalization is used in many existing approaches to ensure the temporal coherence of a stylized animation. This paper presents the results of a psychophysical user-study evaluating the relative distortion induced by a fractalization process of typical medium textures. We perform a ranking experiment, assess the agreement among the participants and study the criteria they used. Finally we show that the average co-occurrence error is an efficient quality predictor in this context.
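The quality predictor named above is built on co-occurrence statistics of texture pixels. A minimal gray-level co-occurrence matrix for one pixel offset, as an illustrative sketch (the paper averages an error over such statistics between original and fractalized textures; names and defaults here are assumptions):

```python
import numpy as np

def cooccurrence(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix of a quantized image:
    M[a, b] is the probability that a pixel of level a has a neighbor of
    level b at offset (dx, dy). img holds integer levels in [0, levels)."""
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()
```

Comparing such matrices before and after fractalization yields a distance that, per the study, correlates with the distortion participants perceived.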

Selective component-based rendering

by Kurt Debattista, Veronica Sundstedt - In Proceedings of GRAPHITE 2005, ACM SIGGRAPH, 2005
Abstract - Cited by 10 (2 self)
The computational requirements of full global illumination rendering are such that it is still not possible to achieve high-fidelity graphics of very complex scenes in a reasonable time on a single computer. By identifying which computations are more relevant to the desired quality of the solution, selective rendering can significantly reduce rendering times. In this paper we present a novel component-based selective rendering system in which the quality of every image, and indeed every pixel, can be controlled by means of a component regular expression (crex). The crex provides a flexible mechanism for controlling which components are rendered and in which order. It can be used as a strategy for directing the light transport within a scene and also in a progressive rendering framework. Furthermore, the crex can be combined with visual perception techniques to reduce rendering computation times without compromising the perceived visual quality. By means of a psychophysical experiment we demonstrate how the crex can be successfully used in such a perceptual rendering framework. In addition, we show how the crex’s flexibility enables it to be incorporated in a predictive framework for time-constrained rendering.

5D Covariance Tracing for Efficient Defocus and Motion Blur

by Laurent Belcour, Cyril Soler, Kartic Subr, Nicolas Holzschuch, Frédo Durand, 2013
Abstract - Cited by 9 (4 self)
The rendering of effects such as motion blur and depth-of-field requires costly 5D integrals. We accelerate their computation through adaptive sampling and reconstruction based on the prediction of the anisotropy and bandwidth of the integrand. For this, we develop a new frequency analysis of the 5D temporal light-field, and show that first-order motion can be handled through simple changes of coordinates in 5D. We further introduce a compact representation of the spectrum using the covariance matrix and Gaussian approximations. We derive update equations for the 5 × 5 covariance matrices for each atomic light transport event, such as transport, occlusion, BRDF, texture, lens, and motion. The focus on atomic operations makes our work general, and removes the need for special-case formulas. We present a new rendering algorithm that computes 5D covariance matrices on the image plane by tracing paths through the scene, focusing on the single-bounce case. This allows us to reduce sampling rates when appropriate and perform reconstruction of images with complex depth-of-field and motion blur effects.
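The atomic-update idea can be made concrete on a 2D (space, angle) slice of the light field: each linear transport event is a matrix, and the tracked covariance updates by a congruence with that matrix. The sign and transpose conventions below are assumptions of this sketch, not taken from the paper, which works with the full 5 × 5 matrices:

```python
import numpy as np

def travel(d):
    """Paraxial free-space transport over distance d as a linear operator
    on a (space, angle) slice of the light field: x' = x + d * theta."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def update_covariance(cov, A):
    """Congruence update of a covariance matrix under the linear operator
    A. Applied per atomic event (transport, lens, motion, ...), chaining
    these updates along a path gives the bandwidth prediction used to set
    sampling rates."""
    return A.T @ cov @ A
```

A congruence preserves symmetry and positive semidefiniteness, which is why a covariance matrix remains a valid spectrum summary after any chain of atomic events.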

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University