Results 1 - 10 of 497
Gradient Domain High Dynamic Range Compression
- Proceedings of ACM SIGGRAPH 2002
"... We present a new method for rendering high dynamic range images on conventional displays. Our method is conceptually simple, computationally efficient, robust, and easy to use. We manipulate the gradient field of the luminance image by attenuating the magnitudes of large gradients. A new, low dynami ..."
Abstract - Cited by 380 (10 self)
We present a new method for rendering high dynamic range images on conventional displays. Our method is conceptually simple, computationally efficient, robust, and easy to use. We manipulate the gradient field of the luminance image by attenuating the magnitudes of large gradients. A new, low dynamic range image is then obtained by solving a Poisson equation on the modified gradient field. Our results demonstrate that the method is capable of drastic dynamic range compression, while preserving fine details and avoiding common artifacts, such as halos, gradient reversals, or loss of local contrast. The method is also able to significantly enhance ordinary images by bringing out detail in dark regions.
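As a concrete illustration of the pipeline this abstract outlines, the sketch below attenuates large log-luminance gradients and reintegrates by solving a Poisson equation. The single-scale attenuation function, the parameter defaults, and the DCT-based Poisson solver are simplifying assumptions of mine, not the paper's full multiresolution scheme.

```python
# Minimal gradient-domain compression sketch (assumed single-scale variant).
import numpy as np
from scipy.fft import dctn, idctn

def compress_dynamic_range(luminance, alpha=0.1, beta=0.85, eps=1e-6):
    """Attenuate large log-luminance gradients, then reintegrate via Poisson."""
    H = np.log(luminance + eps)                      # work in the log domain
    gx = np.diff(H, axis=1, append=H[:, -1:])        # forward differences
    gy = np.diff(H, axis=0, append=H[-1:, :])
    mag = np.sqrt(gx**2 + gy**2) + eps
    scale = (alpha / mag) * (mag / alpha) ** beta    # shrink large gradients (beta < 1)
    gx, gy = gx * scale, gy * scale
    # Divergence of the modified gradient field (backward differences).
    div = np.diff(gx, axis=1, prepend=gx[:, :1]) + np.diff(gy, axis=0, prepend=gy[:1, :])
    # Solve the Poisson equation lap(u) = div with Neumann boundaries via DCT.
    h, w = div.shape
    denom = (2 * np.cos(np.pi * np.arange(w) / w) - 2)[None, :] + \
            (2 * np.cos(np.pi * np.arange(h) / h) - 2)[:, None]
    denom[0, 0] = 1.0                                # fix the free additive constant
    u = idctn(dctn(div, norm='ortho') / denom, norm='ortho')
    return np.exp(u - u.max())                       # back to linear, normalized to <= 1
```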
Photographic tone reproduction for digital images
- In: Proc. of SIGGRAPH ’02
"... A classic photographic task is the mapping of the potentially high dynamic range of real world luminances to the low dynamic range of the photographic print. This tone reproduction problem is also faced by computer graphics practitioners who map digital images to a low dynamic range print or screen. ..."
Abstract - Cited by 349 (17 self)
A classic photographic task is the mapping of the potentially high dynamic range of real world luminances to the low dynamic range of the photographic print. This tone reproduction problem is also faced by computer graphics practitioners who map digital images to a low dynamic range print or screen. The work presented in this paper leverages the time-tested techniques of photographic practice to develop a new tone reproduction operator. In particular, we use and extend the techniques developed by Ansel Adams to deal with digital images. The resulting algorithm is simple and produces good results for a wide variety of images.
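The global operator commonly associated with this paper maps the scene's log-average luminance to a chosen key (middle gray) and then compresses with a sigmoid; the sketch below shows only that global stage, leaving out the local dodging-and-burning extension, and the default key of 0.18 is an assumption taken from common practice.

```python
# Minimal global tone-reproduction sketch (key scaling + sigmoidal compression).
import numpy as np

def global_tonemap(world_lum, key=0.18, eps=1e-6):
    log_avg = np.exp(np.mean(np.log(world_lum + eps)))  # log-average luminance of the scene
    scaled = (key / log_avg) * world_lum                 # map the scene key to middle gray
    return scaled / (1.0 + scaled)                       # compress high luminances toward 1
```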
Color Constancy: A Method for Recovering Surface Spectral Reflectance
- 1986
"... this paper we describe an algorithm for estimating the surface reflectance functions of objects in a scene with incomplete knowledge of the spectral power distribution of the ambient light. We assume that lights and surfaces present in the environment are constrained in a way that we make explicit b ..."
Abstract - Cited by 261 (9 self)
In this paper we describe an algorithm for estimating the surface reflectance functions of objects in a scene with incomplete knowledge of the spectral power distribution of the ambient light. We assume that lights and surfaces present in the environment are constrained in a way that we make explicit below. An image-processing system using this algorithm can assign colors that are constant despite changes in the lighting on the scene. This capability is essential to correct color rendering in photography, in television, and in the construction of artificial visual systems for robotics. We describe how constraints on lights and surfaces in the environment make color constancy possible for a visual system and discuss the implications of the algorithm and these constraints for human color vision.
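A schematic reading of the constraints referred to here is the finite-dimensional linear-models setup: surfaces and illuminants are assumed to lie in low-dimensional bases, with the surface dimension one less than the number of sensors. The sketch below follows that reading; the basis and sensitivity arrays are hypothetical inputs, and the recovered illuminant weights are only determined up to sign and scale.

```python
# Schematic linear-models color constancy sketch (assumed reading of the abstract).
import numpy as np

def recover_reflectance(sensor_vecs, E_basis, S_basis, sensitivities):
    """sensor_vecs: (N, p) responses; E_basis: (m, L) illuminant basis;
    S_basis: (n, L) surface basis with n = p - 1; sensitivities: (p, L)."""
    # Bilinear tensor G[i, j, k] = sum over wavelengths of E_i * S_j * R_k.
    G = np.einsum('il,jl,kl->ijk', E_basis, S_basis, sensitivities)
    # 1) Responses of all surfaces under one light span an n-dim subspace of R^p;
    #    its unit normal is the right singular vector with smallest singular value.
    _, _, vt = np.linalg.svd(sensor_vecs)
    normal = vt[-1]
    # 2) Solve for illuminant weights eps with normal . (Lambda_eps column j) = 0 for all j.
    A = np.einsum('k,ijk->ji', normal, G)      # (n, m) homogeneous system in eps
    _, _, vt = np.linalg.svd(A)
    eps = vt[-1]                               # determined only up to sign and scale
    # 3) With the illuminant fixed, invert the (p, n) matrix Lambda_eps per pixel.
    Lambda = np.einsum('i,ijk->kj', eps, G)    # (p, n)
    sigma = sensor_vecs @ np.linalg.pinv(Lambda).T
    return eps, sigma                          # illuminant weights, per-pixel surface weights
```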
Deriving Intrinsic Images from Image Sequences
- 2001
"... Intrinsic images are a useful midlevel description of scenes proposed by Barrow and Tenebaum [1]. An image is decomposed into two images: a reflectance image and an illumination image. Finding such a decomposition remains a difficult problem in computer vision. Here we focus on a slightly easier pro ..."
Abstract - Cited by 253 (5 self)
Intrinsic images are a useful midlevel description of scenes proposed by Barrow and Tenenbaum [1]. An image is decomposed into two images: a reflectance image and an illumination image. Finding such a decomposition remains a difficult problem in computer vision. Here we focus on a slightly easier problem: given a sequence of T images where the reflectance is constant and the illumination changes, can we recover T illumination images and a single reflectance image? We show that this problem is still ill-posed and suggest approaching it as a maximum-likelihood estimation problem. Following recent work on the statistics of natural images, we use a prior that assumes that illumination images will give rise to sparse filter outputs. We show that this leads to a simple, novel algorithm for recovering reflectance images. We illustrate the algorithm's performance on real and synthetic image sequences.
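One way to make the maximum-likelihood estimate concrete, under the sparse-filter-output prior mentioned above and with simple derivative filters, is to take the temporal median of the per-frame log-image derivatives and then reintegrate. The DCT Poisson solver used for reintegration here is an assumed convenience, not necessarily the authors' reconstruction step.

```python
# Median-of-derivatives reflectance sketch (assumed derivative filters + DCT reintegration).
import numpy as np
from scipy.fft import dctn, idctn

def estimate_reflectance(frames, eps=1e-6):
    """frames: (T, H, W) sequence with fixed reflectance and changing illumination."""
    logs = np.log(frames + eps)
    dx = np.diff(logs, axis=2, append=logs[:, :, -1:])   # per-frame horizontal derivative
    dy = np.diff(logs, axis=1, append=logs[:, -1:, :])   # per-frame vertical derivative
    rx = np.median(dx, axis=0)                           # ML reflectance derivatives
    ry = np.median(dy, axis=0)
    div = np.diff(rx, axis=1, prepend=rx[:, :1]) + np.diff(ry, axis=0, prepend=ry[:1, :])
    h, w = div.shape
    denom = (2 * np.cos(np.pi * np.arange(w) / w) - 2)[None, :] + \
            (2 * np.cos(np.pi * np.arange(h) / h) - 2)[:, None]
    denom[0, 0] = 1.0
    log_r = idctn(dctn(div, norm='ortho') / denom, norm='ortho')
    return np.exp(log_r - log_r.max())                   # reflectance up to a global scale
```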
A Signal-Processing Framework for Inverse Rendering
- In: SIGGRAPH ’01
"... Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limit ..."
Abstract - Cited by 248 (21 self)
Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.
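For the Lambertian special case the convolution result takes a particularly simple form, which also makes the ill-conditioning argument concrete: reflected-light coefficients are products of lighting and BRDF coefficients, and inverse lighting divides them back out. The coefficient values below are the standard clamped-cosine harmonics; treat this as an illustrative special case rather than the paper's general frequency-space formula.

```latex
% Lambertian special case of the convolution formula (clamped-cosine kernel
% coefficients \hat{A}_l); the general framework multiplies lighting and BRDF
% coefficients per order in the same fashion.
\[
  B_{lm} \;=\; \hat{A}_l \, L_{lm},
  \qquad
  \hat{A}_0 = \pi,\quad \hat{A}_1 = \tfrac{2\pi}{3},\quad \hat{A}_2 = \tfrac{\pi}{4},\quad
  \hat{A}_l = 0 \ \text{for odd } l > 1 .
\]
\[
  \text{Inverse lighting (deconvolution):}\qquad
  L_{lm} \;=\; \frac{B_{lm}}{\hat{A}_l},
  \quad\text{ill-conditioned or ill-posed wherever } \hat{A}_l \approx 0 .
\]
```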
Comparing Images Using Color Coherence Vectors
- 1996
"... Color histograms are used to compare images in many applications. Their advantages are efficiency, and insensitivity to small changes in camera viewpoint. However, color histograms lack spatial information, so images with very di#erent appearances can have similar histograms. For example, a picture ..."
Abstract - Cited by 237 (1 self)
Color histograms are used to compare images in many applications. Their advantages are efficiency and insensitivity to small changes in camera viewpoint. However, color histograms lack spatial information, so images with very different appearances can have similar histograms. For example, a picture of fall foliage might contain a large number of scattered red pixels
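A color coherence vector refines the histogram by splitting each color bucket's count into pixels that belong to large connected regions (coherent) and pixels that do not. The sketch below assumes a coarse per-channel quantization and a fixed coherence threshold; the paper's pre-blurring and exact bucketing are omitted.

```python
# Minimal color coherence vector sketch (assumed quantization and threshold).
import numpy as np
from scipy.ndimage import label

def color_coherence_vector(rgb, bins_per_channel=4, tau=25):
    """rgb: (H, W, 3) uint8 image -> (n_colors, 2) array of (coherent, incoherent) counts."""
    q = (rgb.astype(int) * bins_per_channel) // 256        # quantize each channel
    color_idx = q[..., 0] * bins_per_channel**2 + q[..., 1] * bins_per_channel + q[..., 2]
    ccv = np.zeros((bins_per_channel**3, 2), dtype=int)
    for c in range(bins_per_channel**3):
        mask = color_idx == c
        if not mask.any():
            continue
        labels, _ = label(mask)                            # 4-connected components
        sizes = np.bincount(labels.ravel())[1:]            # skip background label 0
        ccv[c, 0] = sizes[sizes >= tau].sum()              # coherent pixels
        ccv[c, 1] = sizes[sizes < tau].sum()               # incoherent pixels
    return ccv

def ccv_distance(v1, v2):
    return np.abs(v1 - v2).sum()                           # L1 comparison of two CCVs
```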
On the Removal of Shadows from Images
- 2006
"... This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequenc ..."
Abstract - Cited by 236 (18 self)
This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.
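The 1D invariant described first in this abstract can be sketched as a projection in log-chromaticity space: illumination change moves pixels along one direction, so projecting onto the orthogonal direction yields a shadow-free grayscale image. The invariant angle is assumed known here (from calibration or entropy minimization), which is the part the paper develops in detail.

```python
# Minimal 1D illuminant-invariant image sketch (theta assumed known).
import numpy as np

def invariant_grayscale(rgb, theta, eps=1e-6):
    """rgb: (H, W, 3) linear-sensor image; theta: illuminant-variation direction in radians."""
    r, g, b = [rgb[..., i].astype(float) + eps for i in range(3)]
    chi = np.stack([np.log(r / g), np.log(b / g)], axis=-1)   # 2D log-chromaticity
    direction = np.array([-np.sin(theta), np.cos(theta)])      # orthogonal to lighting direction
    return chi @ direction                                     # 1D shadow-free representation
```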
Bayesian color constancy
- Journal of the Optical Society of America A, 1997
"... The problem of color constancy may be solved if we can recover the physical properties of illuminants and surfaces from photosensor responses. We consider this problem within the framework of Bayesian decision theory. First, we model the relation among illuminants, surfaces, and photosensor response ..."
Abstract - Cited by 188 (23 self)
The problem of color constancy may be solved if we can recover the physical properties of illuminants and surfaces from photosensor responses. We consider this problem within the framework of Bayesian decision theory. First, we model the relation among illuminants, surfaces, and photosensor responses. Second, we construct prior distributions that describe the probability that particular illuminants and surfaces exist in the world. Given a set of photosensor responses, we can then use Bayes’s rule to compute the posterior distribution for the illuminants and the surfaces in the scene. There are two widely used methods for obtaining a single best estimate from a posterior distribution. These are maximum a posteriori (MAP) and minimum mean-squared-error (MMSE) estimation. We argue that neither is appropriate for perception problems. We describe a new estimator, which we call the maximum local mass (MLM) estimate, that integrates local probability density. The new method uses an optimality criterion that is appropriate for perception tasks: It finds the most probable approximately correct answer. For the case of low observation noise, we provide an efficient approximation. We develop the MLM estimator for the color-constancy problem in which flat matte surfaces are uniformly illuminated. In simulations we show that the MLM method performs better than the MAP estimator and better than a number of standard color-constancy algorithms. We note conditions under which even the optimal estimator produces poor estimates: when the spectral properties of the surfaces in the scene are biased. © 1997 Optical Society of America [S0740-3232(97)01607-4]
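On a gridded posterior the difference between MAP and the maximum-local-mass idea is easy to state: MLM integrates probability over a neighborhood before maximizing, so it prefers broad probable regions to narrow spikes. The Gaussian smoothing below is an assumed stand-in for the paper's local-mass loss, not its exact estimator.

```python
# MAP vs. maximum-local-mass sketch on a gridded posterior (assumed Gaussian local mass).
import numpy as np
from scipy.ndimage import gaussian_filter

def map_estimate(posterior_grid, axes):
    idx = np.unravel_index(np.argmax(posterior_grid), posterior_grid.shape)
    return tuple(ax[i] for ax, i in zip(axes, idx))

def mlm_estimate(posterior_grid, axes, width_in_bins=3.0):
    local_mass = gaussian_filter(posterior_grid, sigma=width_in_bins)  # integrate nearby mass
    idx = np.unravel_index(np.argmax(local_mass), local_mass.shape)
    return tuple(ax[i] for ax, i in zip(axes, idx))
```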
Anatomy and physiology of a color system in the primate visual cortex
- Journal of Neuroscience, 1984
"... Staining for the mitochondrial enzyme cytochrome oxidase reveals an array of dense regions (blobs) in the primate primary visual cortex. They are most obvious in the upper layers, 2 and 3, but can also be seen in layers 4B, 5, and 6, in register with the blobs in layers 2 and 3. We compared cells in ..."
Abstract - Cited by 169 (3 self)
Staining for the mitochondrial enzyme cytochrome oxidase reveals an array of dense regions (blobs) in the primate primary visual cortex. They are most obvious in the upper layers, 2 and 3, but can also be seen in layers 4B, 5, and 6, in register with the blobs in layers 2 and 3. We compared cells inside and outside blobs in macaque and squirrel monkeys, looking at their physiological responses and anatomical connections. Cells within blobs did not show orientation selectivity, whereas cells between blobs were highly orientation selective. Receptive fields of blob cells had circular symmetry and were of three main types, Broad-Band Center-Surround, Red-Green Double-Opponent, and Yellow-Blue Double-Opponent. Double-Opponent cells responded poorly or not at all to white light in any form, or to diffuse light at any wavelength. In contrast to blob cells, none of the cells recorded in layer 4Cβ were Double-Opponent: like the majority of cells in the parvocellular geniculate layers, they were either Broad-Band or Color-Opponent Center-Surround, e.g., red-on-center, green-off-surround. To our surprise, cells in layer 4Cα were orientation selective. In tangential penetrations throughout layers 2 and 3, optimum orientation, when plotted against electrode position, formed long, regular, usually linear sequences, which were interrupted but not perturbed by the
Color Based Object Recognition
- Pattern Recognition, 1997
"... This paper is organized as follows. In Section 2, the dichromatic reflectance under "white" reflection is introduced and new photometric invariant color features are proposed. The performance of object recognition by histogram matching differentiated for the various color models is evaluat ..."
Abstract - Cited by 149 (26 self)
This paper is organized as follows. In Section 2, the dichromatic reflectance under "white" reflection is introduced and new photometric invariant color features are proposed. The performance of object recognition by histogram matching, differentiated for the various color models, is evaluated and compared on an image database of 500 reference images in Section 3.

2 Photometric Color Invariance. In this paper, we concentrate on the following standard, essentially different, color features derived from RGB: intensity $I(R,G,B) = R + G + B$; RGB itself; normalized colors $r(R,G,B) = \frac{R}{R+G+B}$, $g(R,G,B) = \frac{G}{R+G+B}$, $b(R,G,B) = \frac{B}{R+G+B}$; hue $H(R,G,B) = \arctan\!\left(\frac{\sqrt{3}\,(G-B)}{(R-G)+(R-B)}\right)$; and saturation $S(R,G,B) = 1 - \frac{\min(R,G,B)}{R+G+B}$.

2.1 The Reflection Model. Consider an image of an infinitesimal surface patch. Using the red, green, and blue sensors with spectral sensitivities given by $f_R(\lambda)$, $f_G(\lambda)$, and $f_B(\lambda)$ respectively, to obtain an image of the surface patch illuminated by an SPD of the incident light denoted by $e(\lambda)$, the measured sensor values will be given by Shafer [5]: $C = m_b(\vec{n}, \vec{s}) \int \cdots$
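The color features quoted above translate directly into code; the sketch below assumes a linear RGB input and uses arctan2 (my choice) so the hue expression is defined in all quadrants.

```python
# Direct transcription of the color features listed in the excerpt above.
import numpy as np

def color_features(rgb, eps=1e-12):
    R, G, B = [rgb[..., i].astype(float) for i in range(3)]
    intensity = R + G + B
    s = intensity + eps
    r, g, b = R / s, G / s, B / s                                   # normalized colors
    hue = np.arctan2(np.sqrt(3.0) * (G - B), (R - G) + (R - B))     # photometric-invariant hue
    saturation = 1.0 - np.minimum(np.minimum(R, G), B) / s
    return intensity, (r, g, b), hue, saturation
```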