Results 1–10 of 242
Unstructured lumigraph rendering
 In Computer Graphics, SIGGRAPH 2001 Proceedings
, 2001
Abstract

Cited by 291 (11 self)
We describe an image-based rendering approach that generalizes many image-based rendering algorithms currently in use, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras that are not restricted to a plane or to any specific manifold. In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. In the case of fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. Our algorithm achieves this flexibility because it is designed to meet a set of desirable goals that we describe. We demonstrate this flexibility with a variety of examples. Keywords: Image-based rendering.
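The camera-blending idea this abstract describes can be sketched in a few lines. This is an illustrative Python sketch, not the authors' implementation: it assumes each input camera has already been assigned an angular-difference penalty for the desired ray, and gives the k best cameras weights that fall smoothly to zero at the k-th penalty (so weights vary continuously as cameras enter and leave the nearest set).

```python
import numpy as np

def blend_weights(penalties, k=4):
    """Blending weights in the spirit of unstructured lumigraph rendering.
    `penalties` holds one angular-difference penalty per input camera
    (an illustrative choice of penalty). The k lowest-penalty cameras get
    nonzero weight, falling smoothly to zero at the k-th penalty, which
    keeps the blend continuous as the virtual view moves."""
    p = np.asarray(penalties, dtype=float)
    idx = np.argsort(p)[:k]          # indices of the k best cameras
    thresh = p[idx[-1]]              # k-th smallest penalty
    w = np.zeros_like(p)
    # relative weight: large when penalty is small, exactly zero at thresh
    w[idx] = (1.0 - p[idx] / thresh) / np.maximum(p[idx], 1e-9)
    total = w.sum()
    return w / total if total > 0 else w
```

With penalties [0.1, 0.2, 0.4, ...] and k=3, the third-best camera receives weight zero and the remaining weight is split in favor of the closest camera.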
Plenoptic sampling
 In SIGGRAPH
, 2000
Abstract

Cited by 249 (15 self)
This paper studies the problem of plenoptic sampling in image-based rendering (IBR). From a spectral analysis of light field signals and using the sampling theorem, we mathematically derive the analytical functions to determine the minimum sampling rate for light field rendering. The spectral support of a light field signal is bounded by the minimum and maximum depths only, no matter how complicated the spectral support might be because of depth variations in the scene. The minimum sampling rate for light field rendering is obtained by compacting the replicas of the spectral support of the sampled light field within the smallest interval. Given the minimum and maximum depths, a reconstruction filter with an optimal and constant depth can be designed to achieve anti-aliased light field rendering. Plenoptic sampling goes beyond the minimum number of images needed for anti-aliased light field rendering. More significantly, it utilizes the scene depth information to determine the minimum sampling curve in the joint image and geometry space. The minimum sampling curve quantitatively describes the relationship among three key elements in IBR systems: scene complexity (geometrical and textural information), the number of image samples, and the output resolution. Therefore, plenoptic sampling bridges the gap between image-based rendering and traditional geometry-based rendering. Experimental results demonstrate the effectiveness of our approach.
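The two quantities the abstract turns on can be written down directly. The sketch below assumes the standard form of the result: the spectral support lies between lines of slope determined by the min and max depths, and the highest image frequency is the pixel Nyquist limit. The symbols (focal length `f`, pixel size `delta_v`) and the exact constant are illustrative, not quoted from the paper.

```python
def optimal_rendering_depth(z_min, z_max):
    """Optimal constant depth for the reconstruction filter: the plane
    whose disparity is midway between the min- and max-depth disparities,
    i.e. the harmonic-mean-style depth 2 / (1/z_min + 1/z_max)."""
    return 2.0 / (1.0 / z_min + 1.0 / z_max)

def max_camera_spacing(z_min, z_max, f, delta_v):
    """Maximum camera spacing for anti-aliased light field rendering,
    assuming the spectral support is bounded by the min and max depths
    and the highest image frequency is the Nyquist limit 1/(2*delta_v).
    `f` is the focal length and `delta_v` the pixel size, in consistent
    units; both names are illustrative."""
    K_v = 1.0 / (2.0 * delta_v)  # Nyquist frequency of the input images
    return 1.0 / (K_v * f * (1.0 / z_min - 1.0 / z_max))
```

As expected, the allowable spacing shrinks as the depth range (1/z_min - 1/z_max) widens, and collapses to "one camera suffices in the limit" only as z_min approaches z_max.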
Dynamically Reparameterized Light Fields
, 1999
Abstract

Cited by 187 (9 self)
An exciting new area in computer graphics is the synthesis of novel images with photographic effects from an initial database of reference images. This is the primary theme of image-based rendering algorithms. This research extends the light field and lumigraph image-based rendering methods, greatly improving their utility, especially in scenes with much depth variation. First, we have added the ability to vary the apparent focus within a light field using intuitive camera-like controls such as a variable aperture and focus ring. As with lumigraphs, we allow for more general and flexible focal surfaces than a typical focal plane. However, this parameterization works independently of scene geometry; we do not need to recover actual or approximate geometry of the scene for focusing. In addition, we present a method for using multiple focal surfaces in a single image rendering process.
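The variable-aperture-and-focus idea has a well-known minimal form: shift-and-add (synthetic aperture) refocusing. The sketch below is an illustrative simplification of focusing on a single fronto-parallel plane, not the paper's general focal surfaces; the names, the planar focal surface, and the integer-shift rounding are all simplifying assumptions.

```python
import numpy as np

def refocus(images, camera_offsets, disparity):
    """Shift-and-add refocusing, a software analogue of a focus ring.
    Each image is shifted in proportion to its camera's offset from the
    array centre; averaging then keeps the scene plane at the chosen
    `disparity` (pixels of shift per unit camera offset) sharp while
    blurring other depths. Uses wrap-around np.roll and integer shifts
    for brevity."""
    h, w = images[0].shape[:2]
    acc = np.zeros((h, w), dtype=float)
    for img, (dx, dy) in zip(images, camera_offsets):
        sx = int(round(disparity * dx))
        sy = int(round(disparity * dy))
        acc += np.roll(np.roll(img.astype(float), sy, axis=0), sx, axis=1)
    return acc / len(images)
```

Changing `disparity` sweeps the focal plane through the scene; using more or fewer of the surrounding cameras plays the role of opening or closing the aperture.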
A survey of image-based rendering techniques
 In Videometrics, SPIE
, 1999
Abstract

Cited by 177 (11 self)
In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics, in which the 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either approximate or accurate). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering and traditional 3D graphics can be united in a joint image and geometry space. Keywords: Image-based rendering, survey.
Image alignment and stitching: a tutorial
, 2006
Abstract

Cited by 115 (2 self)
This tutorial reviews image alignment and image stitching algorithms. Image alignment algorithms can discover the correspondence relationships among images with varying degrees of overlap. They are ideally suited for applications such as video stabilization, summarization, and the creation of panoramic mosaics. Image stitching algorithms take the alignment estimates produced by such registration algorithms and blend the images in a seamless manner, taking care to deal with potential problems such as blurring or ghosting caused by parallax and scene movement, as well as varying image exposures. This tutorial reviews the basic motion models underlying alignment and stitching algorithms, describes effective direct (pixel-based) and feature-based alignment algorithms, and describes blending algorithms used to produce seamless mosaics.
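A direct (pixel-based) alignment step can be sketched in miniature. This is an illustrative brute-force version under a pure-translation motion model, not the tutorial's algorithms: real stitchers use coarse-to-fine search and sub-pixel refinement, and the wrap-around `np.roll` here stands in for proper boundary handling.

```python
import numpy as np

def best_shift(a, b, max_shift=8):
    """Exhaustively search integer translations of image `b` against
    image `a` and return the (dy, dx) shift with the lowest mean
    squared difference. A toy direct-alignment step under a
    translation-only motion model."""
    best, best_err = (0, 0), float("inf")
    a = a.astype(float)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(b.astype(float), dy, axis=0), dx, axis=1)
            err = ((a - shifted) ** 2).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

Feature-based alignment replaces the exhaustive search with matched keypoints and a robust model fit (e.g. RANSAC over a homography); blending then happens in the overlap regions recovered by either route.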
Survey of image-based representations and compression techniques
 IEEE Trans. Circuits Syst. Video Technol.
, 2003
The Space of All Stereo Images
, 2001
Abstract

Cited by 85 (2 self)
A theory of stereo image formation is presented that enables a complete classification of all possible stereo views, including non-perspective varieties. Towards this end, the notion of epipolar geometry is generalized to apply to multiperspective images. It is shown that any stereo pair must consist of rays lying on one of three varieties of quadric surfaces. A unified representation is developed to model all classes of stereo views, based on the concept of a quadric view. The benefits include a unified treatment of projection and triangulation operations for all stereo views. The framework is applied to derive new types of stereo image representations with unusual and useful properties.
Stereo Reconstruction from Multiperspective Panoramas
 Proc. IEEE Int’l Conf. Computer Vision (ICCV), IEEE CS
, 1999
Abstract

Cited by 82 (10 self)
This paper presents a new approach to computing depth maps from a large collection of images where the camera motion has been constrained to planar concentric circles. We resample the resulting collection of regular perspective images into a set of multiperspective panoramas, and then compute depth maps directly from these resampled images. Only a small number of multiperspective panoramas is needed to obtain a dense and accurate 3D reconstruction, since our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. Using multiperspective panoramas avoids the limited overlap between the original input images that causes problems in conventional multi-baseline stereo. Our approach differs from stereo matching of panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to first order, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas without modification. Experimental results show that our approach generates good depth maps that can be used for image-based rendering tasks such as view interpolation and extrapolation.
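The resampling step the abstract describes has a simple core: take the same vertical strip from every image of the swept camera and concatenate the strips in angular order. The single-column strips and function names below are illustrative simplifications of that construction.

```python
import numpy as np

def multiperspective_panorama(images, column):
    """Build a multiperspective panorama from a camera swept along a
    circle: take the same vertical strip (here one column, `column`)
    from each input image and concatenate the strips ordered by
    rotation angle. Different choices of `column` give panoramas with
    different effective viewpoints, which is what makes stereo between
    two such panoramas possible."""
    strips = [img[:, column] for img in images]  # one column per camera pose
    return np.stack(strips, axis=1)              # columns ordered by angle
```

Stereo is then run between panoramas built from two different `column` choices; since the epipolar geometry between them is, to first order, horizontal lines, an off-the-shelf stereo matcher applies directly.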
Mosaicing New Views: The Crossed-Slits Projection
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2003
Abstract

Cited by 79 (6 self)
We introduce a new kind of mosaicing, where the position of the sampling strip varies as a function of the input camera location. The new images that are generated this way correspond to a new projection model defined by two slits, termed here the Crossed-Slits (X-Slits) projection. In this projection model, every 3D point is projected by a ray defined as the line that passes through that point and intersects the two slits. The intersection of the projection rays with the imaging surface defines the image. X-Slits mosaicing provides two benefits. First, the generated mosaics are closer to perspective images than traditional pushbroom mosaics. Second, by simple manipulations of the strip sampling function, we can change the location of one of the virtual slits, providing a virtual walkthrough of an X-Slits camera; all this can be done without recovering any 3D geometry and without calibration. A number of examples where we translate the virtual camera and change its orientation are given; the examples demonstrate realistic changes in parallax, reflections, and occlusions. Index Terms: Non-stationary mosaicing, crossed-slits projection, pushbroom camera, virtual walkthrough, image-based rendering.
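The mechanism of a varying sampling strip can be sketched directly. The linear column schedule and single-column strips below are illustrative assumptions, not the paper's sampling functions; the point is only that the sampled column changes with camera index, which is what moves the virtual slit.

```python
import numpy as np

def xslits_mosaic(images, start_col, end_col):
    """Crossed-slits-style mosaicing in miniature: instead of a fixed
    sampling strip (which would give an ordinary pushbroom mosaic),
    the sampled column sweeps linearly from `start_col` to `end_col`
    across the input sequence. Varying this schedule changes where one
    of the virtual slits sits, with no 3D recovery or calibration."""
    n = len(images)
    cols = np.linspace(start_col, end_col, n).round().astype(int)
    strips = [img[:, c] for img, c in zip(images, cols)]
    return np.stack(strips, axis=1)
```

With `start_col == end_col` this reduces to a fixed-strip pushbroom mosaic; sweeping the endpoints apart, or animating them over time, is the simple strip-sampling manipulation the abstract uses to produce virtual walkthroughs.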