Results 1–10 of 78
Entropy minimization for shadow removal
 Int. J. Computer Vision
Cited by 32 (1 self)

Abstract
Recently, a method for removing shadows from colour images was developed [Finlayson, Hordley, Lu, and Drew, PAMI 2006] that relies upon finding a special direction in a 2D chromaticity feature space. This "invariant direction" is that for which particular colour features, when projected into 1D, produce a greyscale image which is approximately invariant to the intensity and colour of the scene illumination. Thus shadows, which are in essence a particular type of lighting, are greatly attenuated. The main approach to finding this special angle is a camera calibration: a colour target is imaged under many different lights, and the direction that best makes colour-patch images equal across illuminants is the invariant direction. Here, we take a different approach: instead of a camera calibration, we aim at finding the invariant direction from evidence in the colour image itself. Specifically, we recognize that producing a 1D projection in the correct invariant direction will result in a 1D distribution of pixel values that has smaller entropy than projecting in the wrong direction. The reason is that the correct projection results in a probability distribution spike, for pixels that are all the same except for the lighting that produced their observed RGB values and that therefore lie along a line with orientation equal to the invariant direction. Hence we seek the projection which produces a type of intrinsic, lighting-independent, reflectance-information-only image by minimizing entropy, and from there go on to remove shadows as previously. To be able to develop an effective description of the entropy-minimization task, we go over to the quadratic entropy, rather than Shannon's definition. Replacing the observed pixels with a kernel density probability distribution, the quadratic entropy can be written as a very simple formulation, and can ...
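The entropy-minimization idea in this abstract can be sketched numerically. On synthetic 2D "log-chromaticity" samples smeared along a known lighting direction, scanning candidate angles and scoring the entropy of each 1D projection recovers that direction. This is a minimal illustration, not the paper's pipeline: the data, the 30-degree lighting angle, and the histogram binning are invented for the demo, and plain histogram Shannon entropy stands in for the paper's kernel-density quadratic entropy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D "log-chromaticity" samples: six hypothetical surfaces, each
# smeared along a shared lighting direction of 30 degrees (all invented
# for this demo -- real inputs would come from an RGB image).
true_angle = np.deg2rad(30.0)
lighting = np.array([np.cos(true_angle), np.sin(true_angle)])
surfaces = rng.normal(size=(6, 2))
points = np.concatenate(
    [s + np.outer(rng.uniform(-2.0, 2.0, 200), lighting) for s in surfaces]
)

def projection_entropy(chroma, angle, bins=64):
    """Shannon entropy of the 1D projection orthogonal to `angle`."""
    normal = np.array([-np.sin(angle), np.cos(angle)])
    hist, _ = np.histogram(chroma @ normal, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log(p))

angles = np.linspace(0.0, np.pi, 180, endpoint=False)  # 1-degree steps
entropies = [projection_entropy(points, a) for a in angles]
best = angles[int(np.argmin(entropies))]
print(np.rad2deg(best))  # lands near the 30-degree lighting direction
```

At the invariant angle each surface's lighting smear collapses to a point, so the projected histogram is spiky and its entropy is minimal; at any other angle the smear spreads mass across many bins.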
Families of Alpha-, Beta- and Gamma-Divergences: Flexible and Robust Measures of Similarities
, 2010
Information Measures in Scale-Spaces
 IEEE TRANS. INFORMATION THEORY
, 1999
Cited by 27 (4 self)

Abstract
This paper investigates Rényi's generalized entropies under linear and nonlinear scale-space evolutions of images. Scale-spaces are useful computer vision concepts for both scale analysis and image restoration. We regard images as densities and prove monotonicity and smoothness properties for the generalized entropies. The scale-space-extended generalized entropies are applied to global scale selection and size estimation. Finally, we introduce an entropy-based fingerprint description for textures.
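The monotonicity result has a simple discrete analogue that can be checked directly: treating an image as a density, a doubly stochastic blur can only flatten it, so any Rényi entropy is non-decreasing in scale (by majorization). This is a toy sketch of that analogue, not the paper's continuous scale-space; the random image, the circular binomial kernel, and α = 2 are illustrative choices.

```python
import numpy as np

def renyi_entropy(p, alpha=2.0):
    """Rényi entropy H_alpha = log(sum p^alpha) / (1 - alpha), alpha != 1."""
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def blur_step(p):
    """Circular binomial blur (weights 1/4, 1/2, 1/4) along both axes.
    This is doubly stochastic, so it can only flatten the density."""
    for axis in (0, 1):
        p = 0.25 * np.roll(p, -1, axis) + 0.5 * p + 0.25 * np.roll(p, 1, axis)
    return p

# Treat a random non-negative image as a density (illustrative input).
rng = np.random.default_rng(1)
img = rng.random((32, 32)) ** 4
p = img / img.sum()

entropies = []
for _ in range(10):
    entropies.append(renyi_entropy(p.ravel()))
    p = blur_step(p)

# The sequence is non-decreasing: blurring increases the Rényi entropy.
print(entropies)
```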
Sorting and Searching in the Presence of Memory Faults (without Redundancy)
 Proc. 36th ACM Symposium on Theory of Computing (STOC ’04)
, 2004
Cited by 21 (4 self)

Abstract
We investigate the design of algorithms resilient to memory faults, i.e., algorithms that, despite the corruption of some memory values during their execution, are able to produce a correct output on the set of uncorrupted values. In this framework, we consider two fundamental problems: sorting and searching. In particular, we prove that any O(n log n) comparison-based sorting algorithm can tolerate at most O((n log n)^{1/2}) memory faults. Furthermore, we present one comparison-based sorting algorithm with optimal space and running time that is resilient to O((n log n)^{1/3}) memory faults. We also prove polylogarithmic lower and upper bounds on fault-tolerant searching.
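For contrast, the trivial redundancy-based approach that the title's "without Redundancy" refers to fits in a few lines: replicate each value 2δ + 1 times and read by majority vote. The names and the δ bound here are illustrative; the paper's contribution is achieving resilience without exactly this space blow-up.

```python
from collections import Counter

# Illustrative fault bound per variable; the replication factor 2*delta + 1
# guarantees that honest copies always outnumber corrupted ones.
DELTA = 2

def write_resilient(value, delta=DELTA):
    """Store 2*delta + 1 copies of `value`."""
    return [value] * (2 * delta + 1)

def read_resilient(copies):
    """Majority vote; correct as long as at most `delta` copies were corrupted."""
    return Counter(copies).most_common(1)[0][0]

cell = write_resilient(42)
cell[0] = 7    # an adversarial memory fault
cell[3] = 99   # a second fault (still <= DELTA, so recoverable)
print(read_resilient(cell))  # prints 42
```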
Rényi Entropies for Free Field Theories
Cited by 17 (3 self)

Abstract
Rényi entropies S_q are useful measures of quantum entanglement; they can be calculated from traces of the reduced density matrix raised to the power q, with q ≥ 0. For (d + 1)-dimensional conformal field theories, the Rényi entropies across S^{d−1} may be extracted from the thermal partition functions of these theories on either (d + 1)-dimensional de Sitter space or R × H^d, where H^d is the d-dimensional hyperbolic space. These thermal partition functions can in turn be expressed as path integrals on branched coverings of the (d + 1)-dimensional sphere and S^1 × H^d, respectively. We calculate the Rényi entropies of free massless scalars and fermions in d = 2, and show how, using zeta-function regularization, one finds agreement between the calculations on the branched coverings of S^3 and on S^1 × H^2. Analogous calculations for massive free fields provide monotonic interpolating functions between the Rényi entropies at the Gaussian and the trivial fixed points. Finally, we discuss similar Rényi entropy calculations in d > 2.
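For reference, the Rényi entropies mentioned in this abstract are defined from the reduced density matrix ρ in the standard way (this is the textbook definition, not a result specific to this paper):

```latex
S_q \;=\; \frac{1}{1-q}\,\ln \operatorname{Tr} \rho^{\,q}, \qquad q \ge 0,
\qquad\text{with}\qquad
S_1 \;\equiv\; \lim_{q \to 1} S_q \;=\; -\operatorname{Tr}\,\rho \ln \rho ,
```

so the q → 1 limit recovers the usual entanglement (von Neumann) entropy.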
Toward Approximate Planning in Very Large Stochastic Domains
 In Proceedings of the AAAI Spring Symposium on Decision Theoretic Planning
, 1994
Cited by 17 (6 self)

Abstract
In this paper we extend previous work on approximate planning in large stochastic domains by adding the ability to plan in automatically generated abstract world views. The dynamics of the domain are represented compositionally using a Bayesian network. Sensitivity analysis is performed on the network to identify the aspects of the world upon which success is most highly dependent. An abstract world model is constructed by including only the most relevant aspects of the world. The world view can be refined over time, making the overall planner behave in most cases like an anytime algorithm. This paper is a preliminary report on this ongoing work.

1 Introduction

Many real-world domains cannot be effectively modeled deterministically: the effects of actions vary at random, but with some characterizable distribution. In stochastic domains such as these, a classical plan consisting of a sequence of actions is of little or no use because the appropriate action to take in later steps will d...
Optimal resilient sorting and searching in the presence of memory faults
 In Proc. 33rd International Colloquium on Automata, Languages and Programming, Volume 4051 of Lecture Notes in Computer Science
, 2006
Cited by 17 (5 self)

Abstract
We investigate the problem of reliable computation in the presence of faults that may arbitrarily corrupt memory locations. In this framework, we consider the problems of sorting and searching in optimal time while tolerating the largest possible number of memory faults. In particular, we design an O(n log n) time sorting algorithm that can optimally tolerate up to O(√(n log n)) memory faults. In the special case of integer sorting, we present an algorithm with linear expected running time that can tolerate O(√n) faults. We also present a randomized searching algorithm that can optimally tolerate up to O(log n) memory faults in O(log n) expected time, and an almost optimal deterministic searching algorithm that can tolerate O((log n)^{1−ε}) faults, for any small positive constant ε, in O(log n) worst-case time. All these results improve over previous bounds.
Comparison Of Entropy And Mean Square Error Criteria In Adaptive System Training Using Higher Order Statistics
 Proceedings of the Second International Workshop on Independent Component Analysis and Blind Signal Separation
, 2000
Cited by 15 (7 self)

Abstract
The error-entropy-minimization approach in adaptive system training is investigated, focusing on the effect of Parzen windowing on the location of the global minimum of entropy. An analytical proof shows that the global minimum of the entropy is a local minimum, and possibly the global minimum, of the entropy estimated nonparametrically by Parzen windowing with Gaussian kernels. The performances of the error-entropy-minimization and mean-square-error-minimization criteria are compared in short-term prediction of a chaotic time series. The statistical behavior of the estimation errors and the higher-order central moments of the time series data and its predictions are used as the comparison criteria.

1. INTRODUCTION

Starting with the early work of Wiener [1] on adaptive filters, the mean square error (MSE) has been almost exclusively employed in the training of all adaptive systems, including artificial neural networks. There were mainly two reasons behind this choice: Analyti...
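The quantity this training criterion minimizes can be sketched directly: with a Gaussian Parzen window, the Rényi quadratic entropy of the error sample reduces to a double sum of Gaussians over error pairs (the "information potential"). A minimal sketch, with an arbitrary kernel width σ and synthetic error samples:

```python
import numpy as np

def quadratic_entropy(errors, sigma=0.5):
    """Rényi quadratic entropy H2 = -log(V) of a sample, where the
    'information potential' V is the Parzen estimate of the integral of
    the squared density: a double sum of Gaussians over all error pairs.
    The kernel width sigma = 0.5 is an arbitrary illustrative choice."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]
    # Convolving two width-sigma Gaussian kernels gives variance 2*sigma^2.
    pairwise = np.exp(-diff ** 2 / (4.0 * sigma ** 2)) / np.sqrt(4.0 * np.pi * sigma ** 2)
    return -np.log(pairwise.mean())

rng = np.random.default_rng(2)
concentrated = rng.normal(0.0, 0.1, 500)   # tightly clustered "errors"
dispersed = rng.normal(0.0, 2.0, 500)      # spread-out "errors"

# Concentrated errors yield lower quadratic entropy, which is the state
# the training criterion drives toward.
print(quadratic_entropy(concentrated), quadratic_entropy(dispersed))
```

Because V is a plain pairwise sum, its gradient with respect to each error is cheap to form, which is what makes the quadratic (rather than Shannon) entropy attractive for adaptive training.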