Results 1 - 3 of 3
Adaptive Image Synthesis for Compressive Displays
- ACM Trans. Graph. (SIGGRAPH), 2013
Abstract
Cited by 5 (3 self)
Figure 1: Adaptive light field synthesis for a dual-layer compressive display. By combining sampling, rendering, and display-specific optimization into a single framework, the proposed algorithm facilitates light field synthesis with significantly reduced computational resources. Redundancy in the light field as well as limitations of display hardware are exploited to generate high-quality reconstructions (center left column) for a high-resolution target light field of 85 × 21 views with 840 × 525 pixels each (center). Our adaptive reconstruction uses only 3.82 % of the rays in the full target light field (left column), thus providing significant savings both during rendering and during the computation of the display parameters. The proposed framework allows for higher-resolution light fields, better 3D effects, and perceptually correct animations to be presented on emerging compressive displays (right columns).

Recent years have seen proposals for exciting new computational display technologies that are compressive in the sense that they generate high-resolution images or light fields with relatively few display parameters. Image synthesis for these types of displays involves two major tasks: sampling and rendering high-dimensional target imagery, such as light fields or time-varying light fields, as well as optimizing the display parameters to provide a good approximation of the target content. In this paper, we introduce an adaptive optimization framework for compressive displays that generates high-quality images and light fields using only a fraction of the total plenoptic samples. We demonstrate the framework for a large set of display technologies, including several types of autostereoscopic displays, high dynamic range displays, and high-resolution displays. We achieve significant performance gains, and in some cases are able to process data that would be infeasible with existing methods.
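The core idea of the abstract, optimizing a layered display's parameters against only a small fraction of the target rays, can be illustrated with a toy NumPy sketch. This is not the paper's algorithm: the rank-1 two-layer model, the 25 % sampling rate, and all variable names below are illustrative assumptions (real compressive displays use multi-layer, time-multiplexed factorizations), but the masked multiplicative updates show how display parameters can be fit from sparse plenoptic samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target "light field": rows are angular views, columns are pixels.
# Built as rank-1 so a dual-layer multiplicative display can represent it exactly.
views, pixels = 16, 64
true_front = rng.random(views) + 0.5
true_back = rng.random(pixels) + 0.5
target = np.outer(true_front, true_back)

# Adaptive sampling: optimize the layers against only ~25% of the rays,
# analogous to the paper's use of a small fraction of the plenoptic samples.
W = (rng.random(target.shape) < 0.25).astype(float)

# Display parameters: one attenuation value per view (front layer) and per
# pixel (back layer); an emitted ray's intensity is their product.
front = rng.random(views) + 0.1
back = rng.random(pixels) + 0.1

for _ in range(200):
    # Multiplicative updates for weighted (masked) rank-1 factorization:
    # unsampled rays carry zero weight and never influence the layers.
    recon = np.outer(front, back)
    front *= (W * target) @ back / np.maximum((W * recon) @ back, 1e-12)
    recon = np.outer(front, back)
    back *= (W * target).T @ front / np.maximum((W * recon).T @ front, 1e-12)

recon = np.outer(front, back)
sampled_err = np.abs(recon - target)[W > 0].mean()
```

Because the display model has far fewer parameters (views + pixels values) than the full light field has rays, the sparse sample set is enough to pin down the layers, which is the redundancy the abstract exploits.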
Sample-Based Manifold Filtering for Interactive Global Illumination and Depth of Field
Abstract
Figure 1: Examples of the noisy input images generated by a Monte Carlo renderer and our filtered results. Our denoising technique allows rendering at interactive frame rates.

We present a fast reconstruction filtering method for images generated with Monte Carlo-based rendering techniques. Our approach specializes in reducing global illumination noise in the presence of depth-of-field effects at very low sampling rates and interactive frame rates. We employ edge-aware filtering in the sample space to locally improve the outgoing radiance of each sample. The improved samples are then distributed in the image plane using a fast, linear manifold-based approach supporting very large circles of confusion. We evaluate our filter by applying it to several images containing noise caused by Monte Carlo-simulated global illumination, area light sources, and depth of field. We show that our filter can efficiently denoise such images at interactive frame rates on current GPUs with as few as four to 16 samples per pixel. Our method operates only on the color and geometric sample information output by the initial rendering process. It makes no assumptions about the underlying rendering technique or sampling strategy and can therefore be implemented entirely as a post-process filter.
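The "edge-aware filtering in the sample space" the abstract describes can be sketched as a cross-bilateral filter: noisy radiance is averaged only across samples whose geometric guide (here, depth) is similar, so averaging does not bleed across surface boundaries. The 1-D scanline, the Gaussian weights, and the parameter values below are illustrative assumptions, not the paper's actual filter kernel.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D scanline: a depth discontinuity at pixel 50 separates two surfaces
# with different true radiance; Monte Carlo noise corrupts the samples.
n = 100
depth = np.where(np.arange(n) < 50, 1.0, 3.0)
clean = np.where(np.arange(n) < 50, 0.2, 0.8)   # true outgoing radiance
noisy = clean + rng.normal(0.0, 0.15, n)        # simulated MC noise

def cross_bilateral(values, guide, radius=8, sigma_s=4.0, sigma_g=0.2):
    """Edge-aware smoothing: spatial closeness times guide similarity.

    Radiance is averaged only over neighbors whose depth is similar,
    so the geometric edge survives while the noise is suppressed."""
    out = np.empty_like(values)
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        ds = np.arange(lo, hi) - i
        w = np.exp(-ds**2 / (2.0 * sigma_s**2))
        w *= np.exp(-(guide[lo:hi] - guide[i])**2 / (2.0 * sigma_g**2))
        out[i] = np.sum(w * values[lo:hi]) / np.sum(w)
    return out

filtered = cross_bilateral(noisy, depth)
```

Note that the filter weights depend only on geometry, not on the noisy color itself; this mirrors the abstract's point that the method needs only the color and geometric sample output of the renderer and works as a pure post-process.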