Results 1-10 of 188
Gaussian process dynamical models for human motion
 IEEE Trans. Pattern Anal. Machine Intell.
, 2008
Abstract

Cited by 158 (5 self)
We introduce Gaussian process dynamical models (GPDMs) for nonlinear time series analysis, with applications to learning models of human pose and motion from high-dimensional motion capture data. A GPDM is a latent variable model. It comprises a low-dimensional latent space with associated dynamics, as well as a map from the latent space to an observation space. We marginalize out the model parameters in closed form by using Gaussian process priors for both the dynamical and the observation mappings. This results in a nonparametric model for dynamical systems that accounts for uncertainty in the model. We demonstrate the approach and compare four learning algorithms on human motion capture data, in which each pose is 50-dimensional. Despite the use of small data sets, the GPDM learns an effective representation of the nonlinear dynamics in these spaces.
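The marginalized structure the abstract describes (GP priors over both the observation map and the first-order latent dynamics, leaving only kernel-matrix terms) can be sketched as a numpy objective evaluated at a fixed latent trajectory. This is a minimal sketch under simplified assumptions: the kernel form, fixed hyperparameters, and the function names are illustrative, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * gamma * d2)

def gpdm_objective(X, Y, gamma_y=1.0, gamma_x=1.0, jitter=1e-3):
    """Negative log-likelihood of a GPDM, up to additive constants (sketch).

    X: (T, d) latent trajectory; Y: (T, D) observations.
    Both the observation mapping and the first-order dynamics are
    marginalized under GP priors, so only kernel-matrix terms remain.
    Hyperparameters are held fixed here rather than optimized.
    """
    T, D = Y.shape
    d = X.shape[1]
    # Observation term: D independent GPs sharing a kernel over latent positions.
    Ky = rbf_kernel(X, X, gamma_y) + jitter * np.eye(T)
    _, logdet_y = np.linalg.slogdet(Ky)
    obj = 0.5 * D * logdet_y + 0.5 * np.trace(np.linalg.solve(Ky, Y @ Y.T))
    # Dynamics term: X[1:] regressed on X[:-1] under a GP prior.
    Kx = rbf_kernel(X[:-1], X[:-1], gamma_x) + jitter * np.eye(T - 1)
    _, logdet_x = np.linalg.slogdet(Kx)
    obj += 0.5 * d * logdet_x + 0.5 * np.trace(np.linalg.solve(Kx, X[1:] @ X[1:].T))
    return obj
```

In the paper this objective is minimized jointly over the latent trajectory and hyperparameters; the sketch only shows why no explicit mapping parameters appear in it.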
A Comparison of Computational Color Constancy Algorithms - Part I: Methodology and Experiments with Synthesized Data
, 2002
Abstract

Cited by 129 (8 self)
We introduce a context for testing computational color constancy, specify our approach to the implementation of a number of the leading algorithms, and report the results of three experiments using synthesized data. Experiments using synthesized data are important because the ground truth is known, possible confounds due to camera characterization and preprocessing are absent, and various factors affecting color constancy can be efficiently investigated because they can be manipulated individually and precisely.
Edge-Based Color Constancy
, 2007
Abstract

Cited by 77 (10 self)
Color constancy is the ability to measure colors of objects independent of the color of the light source. A well-known color constancy method is based on the Grey-World assumption, which assumes that the average reflectance of surfaces in the world is achromatic. In this article, we propose a new hypothesis for color constancy, namely the Grey-Edge hypothesis, which assumes that the average edge difference in a scene is achromatic. Based on this hypothesis, we propose an algorithm for color constancy. Contrary to existing color constancy algorithms, which are computed from the zero-order structure of images, our method is based on the derivative structure of images. Furthermore, we propose a framework which unifies a variety of known algorithms (Grey-World, max-RGB, Minkowski norm) and the newly proposed Grey-Edge and higher-order Grey-Edge algorithms. The quality of the various instantiations of the framework is tested and compared to state-of-the-art color constancy methods on two large data sets of images recording objects under a large number of different light sources. The experiments show that the proposed color constancy algorithms obtain results comparable to the state-of-the-art color constancy methods with the merit of being computationally more efficient.
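The first-order instantiation of the hypothesis above can be sketched in a few lines of numpy: per channel, take a Minkowski p-norm of the gradient magnitude and normalize the resulting 3-vector as the illuminant estimate. This is a minimal sketch; the paper's framework also covers higher derivative orders and Gaussian pre-smoothing, which are omitted here, and the parameter value is illustrative.

```python
import numpy as np

def grey_edge(image, p=6):
    """First-order Grey-Edge illuminant estimate (sketch).

    image: (H, W, 3) float array. For each channel, the Minkowski
    p-norm of the gradient magnitude gives one component of the
    illuminant color; the 3-vector is returned at unit length.
    """
    est = np.empty(3)
    for c in range(3):
        dx = np.gradient(image[..., c], axis=1)
        dy = np.gradient(image[..., c], axis=0)
        mag = np.sqrt(dx ** 2 + dy ** 2)
        est[c] = np.mean(mag ** p) ** (1.0 / p)
    return est / np.linalg.norm(est)
```

Replacing the gradient magnitude with the raw pixel values recovers the zero-order members of the framework the abstract mentions (Grey-World at p=1, max-RGB as p grows large).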
Bayesian models of cognition
Abstract

Cited by 54 (2 self)
For over 200 years, philosophers and mathematicians have been using probability theory to describe human cognition. While the theory of probabilities was first developed as a means of analyzing games of chance, it quickly took on a larger and deeper significance as a formal account of how rational agents should reason in situations of uncertainty.
Color constancy using natural image statistics
 in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition
, 2007
Abstract

Cited by 47 (5 self)
Existing color constancy methods are all based on specific assumptions such as the spatial and spectral characteristics of images. As a consequence, no algorithm can be considered universal. However, with the large variety of available methods, the question is how to select the method that performs best for a specific image. To achieve selection and combination of color constancy algorithms, in this paper natural image statistics are used to identify the most important characteristics of color images. Then, based on these image characteristics, the proper color constancy algorithm (or best combination of algorithms) is selected for a specific image. To capture the image characteristics, the Weibull parameterization (e.g., grain size and contrast) is used. It is shown that the Weibull parameterization is related to the image attributes to which the color constancy methods used are sensitive. A MoG classifier is used to learn the correlation and weighting between the Weibull parameters and the image attributes (number of edges, amount of texture, and SNR). The output of the classifier is the selection of the best-performing color constancy method for a certain image. Experimental results show a large improvement over state-of-the-art single algorithms. On a data set consisting of more than 11,000 images, an increase in color constancy performance of up to 20% (median angular error) can be obtained compared to the best-performing single algorithm. Further, it is shown that for certain scene categories, one specific color constancy algorithm can be used instead of the classifier considering several algorithms.
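The Weibull parameterization the abstract relies on can be illustrated by fitting shape and scale parameters to an image's gradient-magnitude distribution. The sketch below uses a log-moment estimator (for X ~ Weibull(k, lam): Var(ln X) = pi^2 / (6 k^2) and E[ln X] = ln(lam) - gamma/k); the helper `image_statistics` is a hypothetical name, and the paper's actual fitting procedure and the MoG classifier stage are not reproduced.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def weibull_params(values):
    """Log-moment Weibull fit: returns (shape k, scale lam)."""
    logs = np.log(values[values > 0])
    k = np.pi / (np.sqrt(6.0) * logs.std())
    lam = np.exp(logs.mean() + EULER_GAMMA / k)
    return k, lam

def image_statistics(image):
    """Weibull statistics of an image's gradient magnitudes
    (hypothetical helper; channel handling is simplified here)."""
    gray = image.mean(axis=-1)
    dx = np.gradient(gray, axis=1)
    dy = np.gradient(gray, axis=0)
    mag = np.sqrt(dx ** 2 + dy ** 2).ravel()
    return weibull_params(mag)
```

Roughly, the shape parameter tracks the grain size of the scene and the scale parameter its contrast, which is what makes these two numbers informative features for choosing among color constancy algorithms.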
The influence of shape on the perception of material reflectance
 ACM Transactions on Graphics
, 2007
Abstract

Cited by 44 (2 self)
Figure 1: The tessellated spheres in the left image are rendered with two different types of a blue plastic BRDF, yet they are perceived as made from the same material. The objects in the right image are rendered with an identical blue plastic BRDF, yet their appearance is very different. Visual observation is our principal source of information in determining the nature of objects, including shape, material, or roughness. The physiological and cognitive processes that resolve visual input into an estimate of the material of an object are influenced by the illumination and the shape of the object. This affects our ability to select materials by observing them on a point-lit sphere, as is common in current 3D modeling applications. In this paper we present an exploratory psychophysical experiment to study various influences on material discrimination in a realistic setting. The resulting data set is analyzed using a wide range of statistical techniques. Analysis of variance is used to estimate the magnitude of the influence of geometry, and fitted psychometric functions produce significantly diverse material discrimination thresholds across different shapes and materials. Suggested improvements to traditional material pickers include direct visualization on the target object, environment illumination, and the use of discrimination thresholds as a step size for parameter adjustments.
Color Constancy in the Nearly Natural Image. 2. Achromatic loci
 Journal of the Optical Society of America A
, 1998
Abstract

Cited by 43 (11 self)
This paper presents experiments that measure successive constancy under similarly natural conditions.
Bayesian Color Constancy Revisited
Abstract

Cited by 38 (0 self)
Computational color constancy is the task of estimating the true reflectances of visible surfaces in an image. In this paper we follow a line of research that assumes uniform illumination of a scene, and that the principal step in estimating reflectances is the estimation of the scene illuminant. We review recent approaches to illuminant estimation, firstly those based on formulae for normalisation of the reflectance distribution in an image, so-called grey-world algorithms, and those based on a Bayesian formulation of image formation. In evaluating these previous approaches we introduce a new tool in the form of a database of 568 high-quality, indoor and outdoor images, accurately labelled with illuminant and preserved in their raw form, free of correction or normalisation. This has enabled us to establish several properties experimentally. Firstly, automatic selection of grey-world algorithms according to image properties is not nearly so effective as has been thought. Secondly, it is shown that Bayesian illuminant estimation is significantly improved by the improved accuracy of priors for illuminant and reflectance that are obtained from the new dataset.
Estimating the Scene Illumination Chromaticity by Using a Neural Network
 Journal of the Optical Society of America A
, 2002
"... ..."