Results 1–10 of 102
Bayesian color constancy
 Journal of the Optical Society of America A
, 1997
Abstract

Cited by 188 (23 self)
The problem of color constancy may be solved if we can recover the physical properties of illuminants and surfaces from photosensor responses. We consider this problem within the framework of Bayesian decision theory. First, we model the relation among illuminants, surfaces, and photosensor responses. Second, we construct prior distributions that describe the probability that particular illuminants and surfaces exist in the world. Given a set of photosensor responses, we can then use Bayes's rule to compute the posterior distribution for the illuminants and the surfaces in the scene. There are two widely used methods for obtaining a single best estimate from a posterior distribution. These are maximum a posteriori (MAP) and minimum mean-squared-error (MMSE) estimation. We argue that neither is appropriate for perception problems. We describe a new estimator, which we call the maximum local mass (MLM) estimate, that integrates local probability density. The new method uses an optimality criterion that is appropriate for perception tasks: It finds the most probable approximately correct answer. For the case of low observation noise, we provide an efficient approximation. We develop the MLM estimator for the color-constancy problem in which flat matte surfaces are uniformly illuminated. In simulations we show that the MLM method performs better than the MAP estimator and better than a number of standard color-constancy algorithms. We note conditions under which even the optimal estimator produces poor estimates: when the spectral properties of the surfaces in the scene are biased. © 1997 Optical Society of America [S0740-3232(97)01607-4]
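The MAP-versus-MLM distinction in this abstract can be illustrated on a toy one-dimensional posterior. The following is only a minimal sketch of the underlying idea, assuming a discretised posterior and a simple box window for the local mass; the paper's efficient low-noise approximation is not reproduced here.

```python
import numpy as np

def map_estimate(x, posterior):
    """MAP: the single point of highest posterior density."""
    return x[np.argmax(posterior)]

def mlm_estimate(x, posterior, width):
    """Maximum local mass (sketch): the point whose surrounding window
    of the given width captures the most probability mass, i.e. the
    most probable *approximately* correct answer."""
    dx = x[1] - x[0]
    half = int(round(width / (2.0 * dx)))
    kernel = np.ones(2 * half + 1)
    local_mass = np.convolve(posterior, kernel, mode="same") * dx
    return x[np.argmax(local_mass)]

# Toy posterior: a tall narrow spike (high density, little mass)
# plus a broad bump (lower peak density, far more total mass).
x = np.linspace(0.0, 10.0, 2001)
p = (1.2 * np.exp(-0.5 * ((x - 2.0) / 0.05) ** 2)
     + 1.0 * np.exp(-0.5 * ((x - 7.0) / 1.0) ** 2))
p /= p.sum() * (x[1] - x[0])  # normalise to unit area
```

Here MAP latches onto the narrow spike at 2.0, while MLM with a window of width 1.0 selects the broad bump near 7.0, the answer most likely to be approximately correct.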
Coloring local feature extraction
 In ECCV, 2006
Abstract

Cited by 111 (20 self)
Abstract. Although color is commonly experienced as an indispensable quality in describing the world around us, state-of-the-art local feature-based representations are mostly based on shape description, and ignore color information. The description of color is hampered by the large amount of variations which cause the measured color values to vary significantly. In this paper we aim to extend the description of local features with color information. To accomplish a wide applicability of the color descriptor, it should be robust to: 1. photometric changes commonly encountered in the real world, 2. varying image quality, from high-quality images to snapshot photo quality and compressed internet images. Based on these requirements we derive a set of color descriptors. The proposed descriptors are compared by extensive testing in multiple application areas, namely matching, retrieval and classification, and on a wide variety of image qualities. The results show that color descriptors remain reliable under photometric and geometrical changes, and with decreasing image quality. For all experiments a combination of color and shape outperforms a pure shape-based approach.
Object Recognition using Local Affine Frames on Maximally Stable Extremal Regions
Abstract

Cited by 72 (2 self)
Viewpoint-independent recognition of objects is a fundamental problem in computer vision. Recently,
Comprehensive Colour Image Normalization
, 1998
Abstract

Cited by 63 (6 self)
The same scene viewed under two different illuminants induces two different colour images. If the two illuminants are the same colour but are placed at different positions then corresponding rgb pixels are related by simple scale factors. In contrast, if the lighting geometry is held fixed but the colour of the light changes then it is the individual colour channels (e.g. all the red pixel values or all the green pixels) that are a scaling apart. It is well known that the image dependencies due to lighting geometry and illuminant colour can be respectively removed by normalizing the magnitude of the rgb pixel triplets (e.g. by calculating chromaticities) and by normalizing the lengths of each colour channel (by running the 'grey-world' colour constancy algorithm). However, neither normalization suffices to account for changes in both the lighting geometry and illuminant colour. In this paper we present a new comprehensive image normalization which removes image dependency on lighting...
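The alternating scheme this abstract describes can be sketched directly: repeat a pixel-wise normalization (removing lighting-geometry scale factors) and a channel-wise grey-world normalization (removing illuminant-colour scale factors) until the image reaches a fixed point. A minimal sketch, assuming the image is an (n_pixels, 3) float array; the fixed iteration count stands in for a proper convergence test.

```python
import numpy as np

def comprehensive_normalize(img, iters=100, eps=1e-12):
    """Alternate (a) pixel-wise and (b) channel-wise normalization.
    (a) removes per-pixel scale factors due to lighting geometry;
    (b) removes per-channel scale factors due to illuminant colour.
    The iteration converges to a fixed point invariant to both."""
    x = img.astype(float).copy()
    for _ in range(iters):
        # (a) scale each rgb triplet so it sums to 1 (chromaticities)
        x = x / (x.sum(axis=1, keepdims=True) + eps)
        # (b) scale each colour channel to constant mean (grey-world)
        x = x / (3.0 * x.mean(axis=0, keepdims=True) + eps)
    return x
```

Scaling any pixel row, or any colour channel, of the input by a positive factor leaves the output (essentially) unchanged, which is the comprehensive invariance the paper establishes.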
Color Angular Indexing
, 1996
Abstract

Cited by 51 (6 self)
A fast color-based algorithm for recognizing colorful objects and colored textures is presented. Objects and textures are represented by just six numbers. Let r, g and b denote the 3 color bands of the image of an object (stretched out as vectors); then the color angular index comprises the 3 inter-band angles (one per pair of image vectors). The color edge angular index is calculated from the image's color edge map (the Laplacian of the color bands) in a similar way. These angles capture important low-order statistical information about the color and edge distributions and are invariant to the spectral power distribution of the scene illuminant. The 6 illumination-invariant angles provide the basis for angular indexing into a database of objects or textures, and have been tested on both Swain's database of color objects, which were all taken under the same illuminant, and Healey and Wang's database of color textures, which were taken under several different illuminants. Color an...
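The three inter-band angles are simple to compute: treat each colour band as one long vector and take the angle between each pair. A minimal sketch (the edge angular index would apply the same function to a Laplacian-filtered image, which is omitted here):

```python
import numpy as np

def angular_index(img):
    """The 3 inter-band angles: treat the r, g, b bands of an (h, w, 3)
    image as long vectors and return the angle between each pair."""
    bands = img.reshape(-1, 3).astype(float)
    r, g, b = bands[:, 0], bands[:, 1], bands[:, 2]

    def angle(u, v):
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    return np.array([angle(r, g), angle(r, b), angle(g, b)])
```

Because scaling a whole channel by a positive factor does not change the direction of its vector, these angles are unchanged under the diagonal model of illuminant change, which is the claimed illumination invariance.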
Gamut constrained illuminant estimation
 International Journal of Computer Vision
, 2006
Abstract

Cited by 47 (0 self)
This paper presents a novel solution to the illuminant estimation problem: the problem of how, given an image of a scene taken under an unknown illuminant, we can recover an estimate of that light. The work is founded on previous gamut mapping solutions to the problem, which solve for a scene illuminant by determining the set of diagonal mappings which take image data captured under an unknown light to a gamut of reference colours taken under a known light. Unfortunately a diagonal model is not always a valid model of illumination change and so previous approaches sometimes return a null solution. In addition, previous methods are difficult to implement. We address these problems by recasting the problem as one of illuminant classification: we define a priori a set of plausible lights, thus ensuring that a scene illuminant estimate will always be found. A plausible light is represented by the gamut of colours observable under it, and the illuminant in an image is classified by determining the plausible light whose gamut is most consistent with the image data. We show that this step (the main computational burden of the algorithm) can be performed simply, quickly, and efficiently by means of a non-negative least-squares optimisation. We report results on a large set of real images which show that it provides excellent illuminant estimation, outperforming previous algorithms.
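The classification step can be sketched as follows: for each plausible light, measure how well each image colour can be written as a non-negative combination of a few representative gamut colours, and pick the light with the smallest total residual. This is a simplification of the paper's convex-combination fit, assuming scipy's `nnls` routine and toy three-colour gamuts; it illustrates the mechanism, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import nnls

def gamut_inconsistency(image_colors, gamut_colors):
    """Total residual when writing each image colour as a non-negative
    combination of a candidate light's gamut colours."""
    A = np.asarray(gamut_colors, dtype=float).T  # shape (3, n_gamut)
    return sum(nnls(A, np.asarray(c, dtype=float))[1] for c in image_colors)

def classify_illuminant(image_colors, gamuts):
    """Return the index of the plausible light whose gamut is most
    consistent with the image data (smallest fit residual)."""
    return int(np.argmin([gamut_inconsistency(image_colors, g)
                          for g in gamuts]))
```

An image whose colours lie inside one light's gamut fits that light with near-zero residual and is classified accordingly, so some estimate is always returned even when a diagonal map to a single reference gamut would fail.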
Moment Invariants for Recognition under Changing Viewpoint and Illumination
 Comput. Vis. Image Underst.
, 2004
Abstract

Cited by 45 (7 self)
Generalised color moments combine shape and color information and put them on an equal footing. Rational expressions of such moments can be designed that are invariant under both geometric deformations and photometric changes. These generalised color moment invariants are effective features for recognition under changing viewpoint and illumination. The paper gives a systematic overview of such moment invariants for several combinations of deformations and photometric changes. Their validity and potential are corroborated through a series of experiments. Both the cases of indoor and outdoor images are considered, as illumination changes tend to differ between these circumstances. Although the generalised color moment invariants are extracted from planar surface patches, it is argued that invariant neighbourhoods offer a concept through which they can also be used to deal with 3D objects and scenes.
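A generalised colour moment weights powers of the colour bands by powers of the image coordinates, roughly M_pq^abc = sum over pixels of x^p y^q R^a G^b B^c, and rational expressions of such moments cancel the photometric scale factors. A minimal sketch; the normalisation and the particular invariant shown are illustrative choices, not the paper's exact conventions.

```python
import numpy as np

def color_moment(img, p, q, a, b, c):
    """Generalised colour moment: sum of x^p * y^q * R^a * G^b * B^c
    over all pixels, with coordinates normalised to [0, 1]."""
    h, w, _ = img.shape
    y, x = np.mgrid[0:h, 0:w] / float(max(h, w) - 1)
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    return np.sum(x ** p * y ** q * R ** a * G ** b * B ** c)

def red_scale_invariant(img):
    """A simple rational expression, M^200 * M^000 / (M^100)^2, in which
    an unknown scaling of the red channel cancels out."""
    m2 = color_moment(img, 0, 0, 2, 0, 0)
    m1 = color_moment(img, 0, 0, 1, 0, 0)
    m0 = color_moment(img, 0, 0, 0, 0, 0)
    return m2 * m0 / m1 ** 2
```

Scaling the red channel by s multiplies m2 by s^2 and m1 by s, so the ratio is unchanged even though the raw moments are not.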
Color Constancy
, 2007
Abstract

Cited by 39 (6 self)
The ability to compute color constant descriptors of objects in view irrespective of the light illuminating the scene is called color constancy. We have used genetic programming to evolve an algorithm for color constancy. The algorithm runs on a grid of processing elements. Each processing element is connected to neighboring processing elements. Information exchange can therefore only occur locally. Randomly generated color Mondrians were used as test cases. The evolved individual was tested on synthetic as well as real input images. Encouraged by these results we developed a parallel algorithm for color constancy. This algorithm is based on the computation of local space average color. Local space average color is used to estimate the illuminant locally for each image pixel. Given an estimate of the illuminant, we can compute the reflectances of the corresponding object points. The algorithm can be easily mapped to a neural architecture and could be implemented directly in CCD or CMOS chips used in today's cameras.
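The locally-communicating grid this abstract describes can be sketched as a diffusion: each pixel repeatedly mixes its running average with those of its neighbours, converging to a local space average colour that serves as a per-pixel illuminant estimate. A minimal sketch with illustrative parameters; toroidal (wrap-around) neighbourhoods via np.roll are used for brevity where a hardware grid would replicate border values.

```python
import numpy as np

def local_space_average_color(img, iters=200, p=0.01):
    """Each pixel mixes a small fraction p of its own colour with the
    average of its 4 neighbours each step, yielding a smooth local
    estimate of the illuminant. img: float array (h, w, 3) in [0, 1]."""
    a = img.astype(float).copy()
    for _ in range(iters):
        neighbours = (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                      np.roll(a, 1, 1) + np.roll(a, -1, 1)) / 4.0
        a = p * img + (1.0 - p) * neighbours
    return a

def reflectance_estimate(img, iters=200, p=0.01):
    """Divide each pixel by twice the local illuminant estimate,
    following the grey-world assumption that the spatial average of
    reflectance is achromatic (about 0.5 per channel)."""
    a = local_space_average_color(img, iters, p)
    return img / (2.0 * a + 1e-12)
```

Under a spatially uniform illuminant the local average equals the scene average, and the method reduces to the classic grey-world algorithm; the local estimate is what lets it handle spatially varying illumination.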
Differential Invariants for Color Images
, 1998
Abstract

Cited by 36 (5 self)
We present in this paper a new method for matching points in stereoscopic color images, based on color differential invariants involving only first-order derivatives of images. Our method is able to robustly match the images even if they present important transformations such as rotation, change of viewpoint and change of intensity between each other. We present here a generalization of a gray-level corner detector to the case of color images. This detector is robust and allows us to extract point primitives in stereoscopic images to be matched together, using only first-order derivatives. We then describe these points with our set of local color invariants, and we propose a simple and efficient scheme for matching them. The robustness of the matching against local deformations is shown using deformations of single color images; then our stereo matching scheme is evaluated using true stereo color images with viewpoint variations. The results obtained on complex scenes clearly show...
Estimating the Scene Illumination Chromaticity by Using a Neural Network
 Journal of the Optical Society of America A
, 2002