Results 1 - 10 of 249
Unstructured lumigraph rendering
- In Computer Graphics, SIGGRAPH 2001 Proceedings, 2001
Cited by 291 (11 self)
We describe an image-based rendering approach that generalizes many image-based rendering algorithms currently in use, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras that are not restricted to a plane or to any specific manifold. In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. In the case of fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. Our algorithm achieves this flexibility because it is designed to meet a set of desirable goals that we describe. We demonstrate this flexibility with a variety of examples. Keywords: image-based rendering.
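The camera-blending idea in this abstract can be sketched as follows, considering only the angular penalty (the full method also weighs resolution and field-of-view penalties); the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def blending_weights(desired_dir, camera_dirs, k=3):
    """Per-camera blending weights for one desired ray, using only the
    angular penalty of unstructured-lumigraph-style blending (a simplified
    sketch; the full method also penalizes resolution and field of view).

    desired_dir: (3,) unit vector along the desired viewing ray.
    camera_dirs: (N, 3) unit vectors from the proxy surface point toward
                 each input camera.
    Returns (N,) weights that sum to 1, nonzero for at most k cameras.
    """
    # Angular penalty: angle between the desired ray and each camera ray.
    cosines = np.clip(camera_dirs @ desired_dir, -1.0, 1.0)
    penalty = np.arccos(cosines)

    # Threshold at the (k+1)-th smallest penalty so weights fall smoothly
    # to zero at the edge of the k-nearest camera set.
    if len(penalty) > k:
        thresh = np.sort(penalty)[k]
    else:
        thresh = penalty.max() + 1e-9
    thresh = max(thresh, 1e-9)  # guard against all-zero penalties
    w = np.maximum(0.0, 1.0 - penalty / thresh)
    return w / w.sum()
```

The smooth falloff to zero at the k-th camera is what avoids popping as cameras enter and leave the nearest set.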
Dynamically Reparameterized Light Fields
- 1999
Cited by 187 (9 self)
An exciting new area in computer graphics is the synthesis of novel images with photographic effects from an initial database of reference images. This is the primary theme of image-based rendering algorithms. This research extends the light field and lumigraph image-based rendering methods, greatly increasing their utility, especially in scenes with much depth variation. First, we have added the ability to vary the apparent focus within a light field using intuitive camera-like controls such as a variable aperture and focus ring. As with lumigraphs, we allow for more general and flexible focal surfaces than a typical focal plane. However, this parameterization works independently of scene geometry; we do not need to recover actual or approximate geometry of the scene for focusing. In addition, we present a method for using multiple focal surfaces in a single image rendering process.
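In its simplest fronto-parallel form, the variable-focus idea reduces to synthetic-aperture shift-and-add: each view is shifted by a disparity proportional to its camera offset and the results are averaged. This sketch assumes cameras on a common plane and integer-pixel shifts, which is far simpler than the paper's general focal surfaces:

```python
import numpy as np

def refocus(images, cam_offsets, depth):
    """Synthetic-aperture refocusing by shift-and-add (a minimal sketch of
    the focal-plane idea; the paper's focal surfaces are more general than
    the single fronto-parallel plane assumed here).

    images:      (N, H, W) grayscale views from cameras on a common plane.
    cam_offsets: list of (dy, dx) camera positions on that plane.
    depth:       focal depth; disparity scales as offset / depth.
    """
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dy, dx) in zip(images, cam_offsets):
        # Integer-pixel shift proportional to parallax at this depth.
        sy, sx = int(round(dy / depth)), int(round(dx / depth))
        acc += np.roll(img, shift=(sy, sx), axis=(0, 1))
    return acc / len(images)
```

Points at the chosen depth align and stay sharp; points off that depth are averaged over mismatched positions and blur, mimicking a wide aperture.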
A survey of image-based rendering techniques
- In Videometrics, SPIE, 1999
Cited by 177 (11 self)
In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics, in which the 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either approximate or accurate). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering and traditional 3D graphics can be united in a joint image and geometry space. Keywords: image-based rendering, survey.
Fourier slice photography
- In Proceedings of the International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’05), 2005
Cited by 127 (4 self)
Survey of image-based representations and compression techniques
- IEEE Trans. Circuits Syst. Video Technol., 2003
Real-Time Consensus-Based Scene Reconstruction using Commodity Graphics Hardware
- 2002
Cited by 81 (7 self)
We present a method that effectively combines a plane-sweeping algorithm with view synthesis for real-time, online 3D scene acquisition and view synthesis. Using real-time imagery from a few calibrated cameras, our method can generate new images from nearby viewpoints, estimate a dense depth map from the current viewpoint, or create a textured triangular mesh. We can do each of these without any prior geometric information or any user interaction, in real time and online. The heart of our method is to use programmable pixel shader technology to square intensity differences between reference image pixels, and then to choose final colors (or depths) that correspond to the minimum difference, i.e., the most consistent color.
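The consensus step can be sketched on the CPU for horizontally rectified cameras (the paper performs the warps and squared differences with projective texturing in pixel shaders); the names and the shift-based warp here are illustrative:

```python
import numpy as np

def plane_sweep(images, baselines, disparities):
    """Plane-sweep stereo in the spirit of consensus-based reconstruction:
    for each candidate plane (disparity), align all views, score each
    pixel by the squared spread of the aligned colors, and keep the color
    and depth of the most consistent plane. A CPU sketch for horizontally
    rectified cameras; real implementations warp projectively on the GPU.

    images:      (N, H, W) grayscale views.
    baselines:   (N,) horizontal camera offsets (reference camera at 0).
    disparities: iterable of candidate disparities, one per swept plane.
    """
    N, H, W = images.shape
    best_cost = np.full((H, W), np.inf)
    best_color = np.zeros((H, W))
    best_disp = np.zeros((H, W))
    for d in disparities:
        # Shift each view into the reference frame for this plane.
        aligned = np.stack([np.roll(img, int(round(b * d)), axis=1)
                            for img, b in zip(images, baselines)])
        mean = aligned.mean(axis=0)
        cost = ((aligned - mean) ** 2).sum(axis=0)  # squared differences
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_color[better] = mean[better]
        best_disp[better] = d
    return best_color, best_disp
```

The returned color at each pixel is the mean of the most mutually consistent samples, and the winning disparity doubles as a dense depth estimate.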
The light field video camera
- In Media Processors, 2002
Cited by 74 (5 self)
We present the Light Field Video Camera, an array of CMOS image sensors for video image-based rendering applications. The device is designed to record a synchronized video dataset from over one hundred cameras to a hard disk array using as few as one PC per fifty image sensors. It is intended to be flexible, modular, and scalable, with extensive visibility and control over the cameras. The Light Field Video Camera is a modular embedded design based on the IEEE 1394 High Speed Serial Bus, with an image sensor and MPEG-2 compression at each node. We show both the flexibility and scalability of the design with a six-camera prototype.
Determining Reflectance Parameters and Illumination Distribution from a Sparse Set of Images for View-dependent Image Synthesis
- In ICCV, 2001
Cited by 46 (5 self)
A framework for photo-realistic, view-dependent image synthesis of a shiny object from a sparse set of images and a geometric model is proposed. Each image is aligned with the 3D model and decomposed into two images with respect to the reflectance components, based on the intensity variation of object surface points. The view-independent surface reflection (diffuse reflection) is stored as one texture map. The view-dependent reflection (specular reflection) images are used to recover an initial approximation of the illumination distribution, and then a two-step numerical minimization algorithm based on a simplified Torrance-Sparrow reflection model is used to estimate the reflectance parameters and refine the illumination distribution. This provides a very compact representation of the data necessary to render synthetic images from arbitrary viewpoints. We have conducted experiments with real objects to synthesize photo-realistic, view-dependent images within the proposed framework.
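One common simplified Torrance-Sparrow form, a Lambertian diffuse term plus a Gaussian specular lobe, can be written as a short function; whether this matches the paper's exact parameterization is an assumption, and the function name is illustrative:

```python
import numpy as np

def simplified_torrance_sparrow(n, l, v, k_d, k_s, sigma):
    """Radiance under one common *simplified* Torrance-Sparrow form:

        I = k_d * cos(theta_i) + (k_s / cos(theta_r)) * exp(-alpha^2 / (2 sigma^2))

    n, l, v: unit surface normal, light direction, and view direction.
    k_d, k_s: diffuse and specular reflectance coefficients.
    sigma:   surface roughness (width of the Gaussian specular lobe).
    alpha is the angle between the normal and the half vector of l and v.
    """
    cos_i = max(n @ l, 0.0)              # incidence angle term (diffuse)
    cos_r = max(n @ v, 1e-6)             # viewing angle (avoid div by zero)
    h = (l + v) / np.linalg.norm(l + v)  # half vector of light and view
    alpha = np.arccos(np.clip(n @ h, -1.0, 1.0))
    return k_d * cos_i + (k_s / cos_r) * np.exp(-alpha**2 / (2 * sigma**2))
```

Because the specular lobe depends on the half vector while the diffuse term does not, observing a surface point from several viewpoints separates the two components, which is what makes the image decomposition and parameter fit described above possible.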
Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays
- 2011
Cited by 45 (17 self)
We develop tomographic techniques for image synthesis on displays composed of compact volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field or high-contrast 2D image when illuminated by a uniform backlight. Since arbitrary oblique views may be inconsistent with any single attenuator, iterative tomographic reconstruction minimizes the difference between the emitted and target light fields, subject to physical constraints on attenuation. As multi-layer generalizations of conventional parallax barriers, such displays are shown, both by theory and experiment, to exceed the performance of existing dual-layer architectures. For 3D display, spatial resolution, depth of field, and brightness are increased compared to parallax barriers. For a plane at a fixed depth, our optimization also allows the optimal construction of high dynamic range displays, confirming existing heuristics and providing the first extension to multiple, disjoint layers. We conclude by demonstrating the benefits and limitations of attenuation-based light field displays using an inexpensive fabrication method: separating multiple printed transparencies with acrylic sheets.
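The attenuation model behind this abstract is multiplicative, so it becomes linear in log space, which is what makes a constrained tomographic solve possible. A minimal sketch under that assumption (projected gradient descent with nonnegative log-attenuations, not the paper's actual solver):

```python
import numpy as np

def solve_layers(P, target, n_vars, iters=500, lr=1.0):
    """Recover per-pixel layer transmittances for a multi-layer attenuation
    display (a minimal sketch of the idea, not the paper's exact method).

    Each ray's transmittance is the product of the layer-pixel
    transmittances it crosses; in log space that product becomes a sum, so
    with a = -log(transmittance) >= 0 the model is linear: P @ a = b.
    We minimize ||P a - b||^2 by projected gradient descent, projecting
    onto a >= 0 (a physical layer cannot amplify light).

    P:      (n_rays, n_vars) 0/1 matrix; P[r, j] = 1 if ray r crosses
            layer pixel j.
    target: (n_rays,) desired transmitted fractions in (0, 1].
    """
    b = -np.log(target)
    a = np.zeros(n_vars)
    # Step below 2 / lambda_max(P^T P) keeps gradient descent stable.
    step = lr / max(np.linalg.norm(P, 2) ** 2, 1e-12)
    for _ in range(iters):
        grad = P.T @ (P @ a - b)
        a = np.maximum(a - step * grad, 0.0)  # project onto a >= 0
    return np.exp(-a)  # back to per-pixel transmittances
```

Oblique rays cross different pixel combinations on each layer, so one small set of layer patterns can satisfy many view-dependent targets at once, which is the sense in which the display is tomographic.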