Results 1 - 10 of 565
Light Field Rendering, 1996
Cited by 1337 (22 self)
A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. In this paper, we describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images. The key to this technique lies in interpreting the input images as 2D slices of a 4D function - the light field. This function completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. We describe a ...
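The paper's key idea, interpreting images as 2D slices of a 4D light field, is usually realized with the two-plane ("light slab") parameterization. A minimal sketch follows; the plane placement, array layout, and nearest-neighbor lookup are simplifying assumptions here, not the paper's actual resampling pipeline:

```python
import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Intersect a ray with the two parallel planes z=z_uv and z=z_st,
    returning the 4D light-slab coordinates (u, v, s, t)."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    t_uv = (z_uv - o[2]) / d[2]   # ray parameter at the (u, v) plane
    t_st = (z_st - o[2]) / d[2]   # ray parameter at the (s, t) plane
    u, v = (o + t_uv * d)[:2]
    s, t = (o + t_st * d)[:2]
    return u, v, s, t

def sample_light_field(L, u, v, s, t):
    """Nearest-neighbor lookup in a discretized light field L[u, v, s, t],
    assuming each axis is sampled uniformly on [0, 1]."""
    idx = [int(round(np.clip(x, 0.0, 1.0) * (n - 1)))
           for x, n in zip((u, v, s, t), L.shape)]
    return L[tuple(idx)]
```

Rendering a novel view then amounts to casting one such ray per pixel and resampling the stored slab; the paper interpolates rather than taking the nearest sample.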
The Lumigraph - In Proceedings of SIGGRAPH 96, 1996
Cited by 1025 (39 self)
This paper discusses a new method for capturing the complete appearance of both synthetic and real-world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions. Unlike the shape capture process traditionally used in computer vision and the rendering process traditionally used in computer graphics, our approach does not rely on geometric representations. Instead we sample and reconstruct a 4D function, which we call a Lumigraph. The Lumigraph is a subset of the complete plenoptic function that describes the flow of light at all positions in all directions. With the Lumigraph, new images of the object can be generated very quickly, independent of the geometric or illumination complexity of the scene or object. The paper discusses a complete working system including the capture of samples, the construction of the Lumigraph, and the subsequent rendering of images from this new representation.
Plenoptic Modeling: An Image-Based Rendering System, 1995
Cited by 760 (20 self)
Image-based rendering is a powerful new approach for generating real-time photorealistic computer graphics. It can provide convincing animations without an explicit geometric representation. We use the “plenoptic function” of Adelson and Bergen to provide a concise problem statement for image-based rendering paradigms, such as morphing and view interpolation. The plenoptic function is a parameterized function for describing everything that is visible from a given point in space. We present an image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function. In addition, we introduce a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections.
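The plenoptic function of Adelson and Bergen referenced here is the 7D radiance map

```latex
P(\theta, \phi, \lambda, t, V_x, V_y, V_z)
```

giving the intensity of light seen from viewing position $(V_x, V_y, V_z)$ in direction $(\theta, \phi)$ at wavelength $\lambda$ and time $t$. Fixing the position, wavelength, and time reduces it to a 2D panorama $P(\theta, \phi)$, which is the form a cylindrical-projection system can sample directly.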
Guided Search 2.0: A revised model of visual search - Psychonomic Bulletin & Review, 1994
Photobook: Content-Based Manipulation of Image Databases, 1995
Cited by 542 (0 self)
We describe the Photobook system, which is a set of interactive tools for browsing and searching images and image sequences. These query tools differ from those used in standard image databases in that they make direct use of the image content rather than relying on text annotations. Direct search on image content is made possible by use of semantics-preserving image compression, which reduces images to a small set of perceptually-significant coefficients. We describe three types of Photobook descriptions in detail: one that allows search based on appearance, one that uses 2-D shape, and a third that allows search based on textural properties. These image content descriptions can be combined with each other and with text-based descriptions to provide a sophisticated browsing and search capability. In this paper we demonstrate Photobook on databases containing images of people, video keyframes, hand tools, fish, texture swatches, and 3-D medical data.
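The "small set of perceptually-significant coefficients" idea can be sketched generically with a learned linear basis. Photobook's actual appearance, shape, and texture descriptors are more specialized; the PCA-via-SVD reduction and Euclidean search below are illustrative assumptions, and all function names are hypothetical:

```python
import numpy as np

def fit_basis(images, k=8):
    """Learn a k-dimensional linear basis (PCA via SVD) so that each image
    is reduced to k coefficients."""
    X = np.stack([im.ravel() for im in images]).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]          # mean image and top-k basis vectors

def encode(im, mean, basis):
    """Project one image onto the basis, yielding its coefficient vector."""
    return basis @ (im.ravel().astype(float) - mean)

def search(query, database_codes, mean, basis):
    """Return database indices sorted by distance in coefficient space."""
    q = encode(query, mean, basis)
    dists = np.linalg.norm(database_codes - q, axis=1)
    return np.argsort(dists)
```

Because search runs in the low-dimensional coefficient space rather than on raw pixels, queries over large databases stay cheap.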
Incremental Learning for Robust Visual Tracking, 2008
Cited by 306 (18 self)
Visual tracking, in essence, deals with nonstationary image streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail in the presence of significant variation of the object’s appearance or surrounding illumination. One reason for such failures is that many algorithms employ fixed appearance models of the target. Such models are trained using only appearance data available before tracking begins, which in practice limits the range of appearances that are modeled, and ignores the large volume of information (such as shape changes or specific lighting conditions) that becomes available during tracking. In this paper, we present a tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. The model update, based on incremental algorithms for principal component analysis, includes two important features: a method for correctly updating the sample mean, and a for-
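The "correctly updating the sample mean" step the abstract mentions reduces to exact merging of batch statistics. The full method also updates the eigenbasis with an incremental SVD, which this sketch omits; the function name and interface below are hypothetical:

```python
import numpy as np

def update_mean(mean_a, n_a, batch):
    """Merge the exact mean of n_a previously seen samples with a new batch,
    returning the exact mean and count of the combined data."""
    batch = np.asarray(batch, float)
    n_b = len(batch)
    mean_b = batch.mean(axis=0)
    n = n_a + n_b
    # Weighted combination: identical to recomputing the mean from scratch,
    # but without storing the old samples.
    mean = (n_a * mean_a + n_b * mean_b) / n
    return mean, n
```

The weighting by sample counts is what makes the update exact; a naive average of the two means would be biased whenever the batch sizes differ.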
Plenoptic Sampling - In SIGGRAPH, 2000
Cited by 249 (15 self)
This paper studies the problem of plenoptic sampling in image-based rendering (IBR). From a spectral analysis of light field signals and using the sampling theorem, we mathematically derive the analytical functions to determine the minimum sampling rate for light field rendering. The spectral support of a light field signal is bounded by the minimum and maximum depths only, no matter how complicated the spectral support might be because of depth variations in the scene. The minimum sampling rate for light field rendering is obtained by compacting the replicas of the spectral support of the sampled light field within the smallest interval. Given the minimum and maximum depths, a reconstruction filter with an optimal and constant depth can be designed to achieve anti-aliased light field rendering. Plenoptic sampling goes beyond the minimum number of images needed for anti-aliased light field rendering. More significantly, it utilizes the scene depth information to determine the minimum sampling curve in the joint image and geometry space. The minimum sampling curve quantitatively describes the relationship among three key elements in IBR systems: scene complexity (geometrical and textural information), the number of image samples, and the output resolution. Therefore, plenoptic sampling bridges the gap between image-based rendering and traditional geometry-based rendering. Experimental results demonstrate the effectiveness of our approach.
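In the 2D light-field slice with camera coordinate $t$ and image coordinate $v$, the depth bound can be stated as follows ($f$ is the focal length and $B_v$ the highest frequency in the image-plane signal; constant factors depend on the Fourier convention, so only the proportionality is given here):

```latex
\Omega_t = -\frac{f}{z}\,\Omega_v, \qquad z \in [z_{\min}, z_{\max}],
```

so the spectral support lies in the wedge between the lines for $z_{\min}$ and $z_{\max}$, and packing replicas of that wedge without overlap yields a maximum camera spacing

```latex
\Delta t_{\max} \;\propto\; \frac{1}{f\,B_v\left(\frac{1}{z_{\min}} - \frac{1}{z_{\max}}\right)}.
```

Note that as the depth range shrinks ($z_{\min} \to z_{\max}$), the allowed spacing grows without bound, which is why depth knowledge reduces the number of images needed.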
Rendering with Concentric Mosaics - In Proc. SIGGRAPH, 1999
Cited by 242 (29 self)
This paper presents a novel 3D plenoptic function, which we call concentric mosaics. We constrain camera motion to planar concentric circles, and create concentric mosaics using a manifold mosaic for each circle (i.e., composing slit images taken at different locations). Concentric mosaics index all input image rays naturally in 3 parameters: radius, rotation angle and vertical elevation. Novel views are rendered by combining the appropriate captured rays in an efficient manner at rendering time. Although vertical distortions exist in the rendered images, they can be alleviated by depth correction. Like panoramas, concentric mosaics do not require recovering geometric and photometric scene models. Moreover, concentric mosaics provide a much richer user experience by allowing the user to move freely in a circular region and observe significant parallax and lighting changes. Compared with a light field or Lumigraph, concentric mosaics have much smaller file size because only a 3D plenoptic function is constructed. Concentric mosaics have good space and computational efficiency, and are very easy to capture. This paper discusses a complete working system spanning the capture, construction, compression, and rendering of concentric mosaics from synthetic and real environments.
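The 3-parameter indexing of rays can be sketched for the in-plane part: a planar viewing ray is identified by the concentric circle it is tangent to (radius) and the angular position of the tangency (rotation angle); the third parameter, vertical elevation, comes from the image row. The helper below is an illustrative assumption, not the paper's implementation:

```python
import math

def ray_to_mosaic_coords(px, py, dx, dy):
    """Index a planar viewing ray (origin (px, py), direction (dx, dy)) by
    r     = distance from the rig center (origin) to the ray's line, and
    theta = angle of the point on the line closest to the center."""
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    r = abs(px * dy - py * dx)    # perpendicular distance of line to origin
    s = -(px * dx + py * dy)      # ray parameter of the closest point
    cx, cy = px + s * dx, py + s * dy
    theta = math.atan2(cy, cx)
    return r, theta
```

Rendering a novel view inside the capture circle then looks up, for each vertical slit of the view, the captured slit with the nearest (r, theta).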
A Theory of Single-Viewpoint Catadioptric Image Formation - International Journal of Computer Vision, 1999
Cited by 236 (12 self)
Conventional video cameras have limited fields of view which make them restrictive for certain applications in computational vision. A catadioptric sensor uses a combination of lenses and mirrors placed in a carefully arranged configuration to capture a much wider field of view. One important design goal for catadioptric sensors is choosing the shapes of the mirrors in a way that ensures that the complete catadioptric system has a single effective viewpoint. The reason a single viewpoint is so desirable is that it is a requirement for the generation of pure perspective images from the sensed images. In this paper, we derive the complete class of single-lens single-mirror catadioptric sensors that have a single viewpoint. We describe all of the solutions in detail, including the degenerate ones, with reference to many of the catadioptric systems that have been proposed in the literature. In addition, we derive a simple expression for the spatial resolution of a catadioptric sensor in te...
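One well-known member of the class of mirrors derived in this line of work is the hyperboloidal mirror, which satisfies the single-viewpoint constraint through the focal property of the hyperbola. With the effective viewpoint at the origin and the camera pinhole at $(0, 0, c)$ (the coefficients $a$, $b$ are free design parameters, not the paper's notation):

```latex
\frac{\left(z - \frac{c}{2}\right)^{2}}{a^{2}} - \frac{x^{2} + y^{2}}{b^{2}} = 1,
\qquad \frac{c}{2} = \sqrt{a^{2} + b^{2}},
```

so any scene ray directed at the origin (one focus) reflects off the mirror into a ray through the pinhole (the other focus), which is exactly the property that allows undistorted perspective images to be generated from the sensed image.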