Results 1–10 of 941
Fast Bilateral Filtering for the Display of High-Dynamic-Range Images
, 2002
Abstract

Cited by 453 (10 self)
We present a new technique for the display of high-dynamic-range images, which reduces the contrast while preserving detail. It is based on a two-scale decomposition of the image into a base layer, encoding large-scale variations, and a detail layer. Only the base layer has its contrast reduced, thereby preserving detail. The base layer is obtained using an edge-preserving filter called the bilateral filter. This is a nonlinear filter, where the weight of each pixel is computed using a Gaussian in the spatial domain multiplied by an influence function in the intensity domain that decreases the weight of pixels with large intensity differences. We express bilateral filtering in the framework of robust statistics and show how it relates to anisotropic diffusion. We then accelerate bilateral filtering by using a piecewise-linear approximation in the intensity domain and appropriate subsampling. This results in a speedup of two orders of magnitude. The method is fast and requires no parameter setting.
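The core filter described in the abstract is easy to sketch. Below is a minimal brute-force bilateral filter (not the paper's accelerated piecewise-linear version); the parameter names, window radius, and the toy step-edge image are illustrative choices, not the authors':

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Brute-force bilateral filter: a spatial Gaussian multiplied by an
    intensity 'influence' Gaussian that down-weights pixels whose values
    differ strongly from the centre pixel, so edges are preserved."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            influence = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * influence
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out

# A noisy step edge: the filter smooths within each flat side
# but leaves the discontinuity intact.
rng = np.random.default_rng(0)
step = np.concatenate([np.zeros((8, 8)), np.ones((8, 8))], axis=1)
noisy = step + 0.05 * rng.standard_normal(step.shape)
smooth = bilateral_filter(noisy)
```

The O(n · r²) double loop is exactly the cost the paper's piecewise-linear approximation and subsampling are designed to avoid.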
The Jackknife and the Bootstrap for General Stationary Observations
, 1989
Abstract

Cited by 414 (2 self)
this paper we will always consider statistics T_N of the form T_N(X_1, ..., X_N) = T(ρ ...
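This is the paper that introduced the moving-blocks bootstrap for dependent data. A minimal sketch of the resampling step (the block length and the AR(1) toy series are illustrative choices):

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """Resample a stationary series by concatenating randomly chosen
    overlapping blocks of length block_len, preserving the short-range
    dependence that i.i.d. resampling would destroy."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

rng = np.random.default_rng(1)
# AR(1) series: serial dependence makes the ordinary bootstrap invalid.
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

# Bootstrap distribution of the sample mean.
reps = np.array([moving_block_bootstrap(x, 25, rng).mean() for _ in range(200)])
```

The spread of `reps` estimates the sampling variability of the mean under dependence, which is larger than the i.i.d. formula would suggest.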
Robust Anisotropic Diffusion
, 1998
Abstract

Cited by 361 (17 self)
Relations between anisotropic diffusion and robust statistics are described in this paper. Specifically, we show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The "edge-stopping" function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new "edge-stopping" function based on Tukey's biweight robust estimator, that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in an image that has been smoothed with anisotropic diffusion. Additionally, we derive a relationship between anisotropic diffusion and regularization with line processes. Adding constraints on the spatial organization of the ...
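A minimal sketch of Perona–Malik-style diffusion with the Tukey biweight edge-stopping function described above. The explicit scheme, periodic boundaries via `np.roll`, and the step-size/scale values are simplifications for illustration, not the paper's exact formulation:

```python
import numpy as np

def tukey_g(d, sigma):
    """Tukey biweight edge-stopping function: exactly zero for gradient
    magnitudes beyond sigma, so strong edges halt diffusion entirely."""
    return np.where(np.abs(d) <= sigma, (1 - (d / sigma)**2)**2, 0.0)

def anisotropic_diffusion(img, n_iter=20, sigma=0.3, lam=0.2):
    """Explicit 4-neighbour diffusion; np.roll gives periodic boundaries,
    which is harmless for this toy example."""
    u = img.astype(float)
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + lam * (tukey_g(dn, sigma) * dn + tukey_g(ds, sigma) * ds
                       + tukey_g(de, sigma) * de + tukey_g(dw, sigma) * dw)
    return u

# Noisy step edge: noise gradients (~0.07) fall inside the biweight's
# support and are smoothed; the unit edge exceeds sigma and is untouched.
rng = np.random.default_rng(0)
step = np.concatenate([np.zeros((16, 8)), np.ones((16, 8))], axis=1)
u = anisotropic_diffusion(step + 0.05 * rng.standard_normal(step.shape))
```

Because the biweight is redescending (zero beyond sigma), diffusion stops automatically at edges instead of merely slowing, which is the sharper-boundary behaviour the abstract claims over earlier exponential edge-stopping functions.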
A Fast Algorithm for the Minimum Covariance Determinant Estimator
 Technometrics
, 1998
Abstract

Cited by 346 (15 self)
The minimum covariance determinant (MCD) method of Rousseeuw (1984) is a highly robust estimator of multivariate location and scatter. Its objective is to find h observations (out of n) whose covariance matrix has the lowest determinant. Until now applications of the MCD were hampered by the computation time of existing algorithms, which were limited to a few hundred objects in a few dimensions. We discuss two important applications of larger size: one about a production process at Philips with n = 677 objects and p = 9 variables, and a data set from astronomy with n = 137,256 objects and p = 27 variables. To deal with such problems we have developed a new algorithm for the MCD, called FAST-MCD. The basic ideas are an inequality involving order statistics and determinants, and techniques which we call 'selective iteration' and 'nested extensions'. For small data sets FAST-MCD typically finds the exact MCD, whereas for larger data sets it gives more accurate results than existing algori...
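The heart of the algorithm is the concentration step (C-step): re-fit the mean and covariance on the current subset, then keep the h points with smallest Mahalanobis distance, which provably never increases the determinant. A minimal sketch under simplified assumptions (elemental starts of p + 1 points and a fixed iteration cap; the paper's nested extensions for very large n are omitted):

```python
import numpy as np

def c_step(X, idx, h):
    """One concentration step: fit on the current subset, then keep the
    h points closest in Mahalanobis distance.  det(cov) never increases."""
    sub = X[idx]
    mu = sub.mean(axis=0)
    cov = np.cov(sub, rowvar=False)
    d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.pinv(cov), X - mu)
    return np.argsort(d2)[:h]

def mcd(X, h, n_starts=20, rng=None):
    """Toy FAST-MCD-style search: many random elemental starts, C-steps
    to convergence, keep the subset with the smallest determinant."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    best_det, best_idx = np.inf, None
    for _ in range(n_starts):
        idx = c_step(X, rng.choice(n, size=p + 1, replace=False), h)
        for _ in range(10):
            new = c_step(X, idx, h)
            if set(new) == set(idx):
                break
            idx = new
        det = np.linalg.det(np.cov(X[idx], rowvar=False))
        if det < best_det:
            best_det, best_idx = det, idx
    return X[best_idx].mean(axis=0), np.cov(X[best_idx], rowvar=False)

# 100 clean points plus 20 gross outliers: the classical mean is pulled
# far off, while the MCD location stays near the true centre.
rng = np.random.default_rng(2)
X = np.vstack([rng.standard_normal((100, 2)),
               rng.standard_normal((20, 2)) + 10.0])
mu, S = mcd(X, h=60)
```

With h = 60 out of 120, up to half the data can be contaminated without breaking the estimate.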
Robust multiresolution estimation of parametric motion models
 Journal of Visual Communication and Image Representation
, 1995
Abstract

Cited by 329 (55 self)
This paper describes a method to estimate parametric motion models. Motivations for the use of such models are, on one hand, their efficiency, which has been demonstrated in numerous contexts such as estimation, segmentation, tracking and interpretation of motion, and on the other hand, their low computational cost compared to optical flow estimation. However, it is important to obtain the best possible accuracy for the estimated parameters, and to take into account the problem of multiple motions. We have therefore developed two robust estimators in a multiresolution framework. Numerical results support this approach, as validated by the use of these algorithms on complex sequences.
Information fusion in biometrics
 Pattern Recognition Letters
, 2003
Abstract

Cited by 293 (17 self)
User verification systems that use a single biometric indicator often have to contend with noisy sensor data, restricted degrees of freedom, non-universality of the biometric trait and unacceptable error rates. Attempting to improve the performance of individual matchers in such situations may not prove to be effective because of these inherent problems. Multibiometric systems seek to alleviate some of these drawbacks by providing multiple pieces of evidence of the same identity. These systems help achieve an increase in performance that may not be possible using a single biometric indicator. Further, multibiometric systems provide anti-spoofing measures by making it difficult for an intruder to spoof multiple biometric traits simultaneously. However, an effective fusion scheme is necessary to combine the information presented by multiple domain experts. This paper addresses the problem of information fusion in biometric verification systems by combining information at the matching score level. Experimental results on combining three biometric modalities (face, fingerprint and hand geometry) are presented.
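Matching-score-level fusion can be sketched in a few lines: normalise each matcher's scores to a common range, then combine them. The min–max normalisation and equal-weight sum rule below are one standard choice; the score values are illustrative, not the paper's data:

```python
import numpy as np

def min_max_norm(scores):
    """Map raw matcher scores to [0, 1] so modalities with different
    native scales become comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def sum_rule(score_lists, weights=None):
    """Score-level fusion: normalise each modality, then take a
    (weighted) sum of the normalised scores per candidate."""
    normed = np.vstack([min_max_norm(s) for s in score_lists])
    if weights is None:
        weights = np.ones(len(score_lists)) / len(score_lists)
    return np.asarray(weights) @ normed

# Three matchers scoring the same three candidate identities,
# each on its own scale.
face = [0.2, 0.9, 0.4]
finger = [55, 80, 30]
hand = [0.1, 0.7, 0.3]
fused = sum_rule([face, finger, hand])
```

An intruder now has to defeat all three modalities at once to push the fused score above the acceptance threshold, which is the anti-spoofing argument in the abstract.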
Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans
, 1994
Abstract

Cited by 278 (9 self)
A mobile robot exploring an unknown environment has no absolute frame of reference for its position, other than features it detects through its sensors. Using distinguishable landmarks is one possible approach, but it requires solving the object recognition problem. In particular, when the robot uses two-dimensional laser range scans for localization, it is difficult to accurately detect and localize landmarks in the environment (such as corners and occlusions) from the range scans. In this paper, we develop two new iterative algorithms to register a range scan to a previous scan so as to compute relative robot positions in an unknown environment, that avoid the above problems. The first algorithm is based on matching data points with tangent directions in two scans and minimizing a distance function in order to solve the displacement between the scans. The second algorithm establishes correspondences between points in the two scans and then solves the point-to-point least-squares probl...
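The second, point-to-point algorithm has the shape of a standard ICP loop: alternate nearest-neighbour matching with a closed-form rigid fit. A minimal 2D sketch (brute-force matching and the SVD/Procrustes solution are textbook simplifications; the scan points are illustrative):

```python
import numpy as np

def best_rigid_2d(P, Q):
    """Closed-form least-squares rotation R and translation t taking
    points P onto their correspondences Q (SVD / Procrustes)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp_2d(src, dst, n_iter=20):
    """Point-to-point ICP: match each source point to its nearest
    destination point, fit the rigid transform, apply, repeat."""
    cur = src.copy()
    for _ in range(n_iter):
        d2 = ((cur[:, None, :] - dst[None, :, :])**2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_2d(cur, matched)
        cur = cur @ R.T + t
    return cur

# Two 'scans' of an L-shaped wall, the second displaced by a small
# rotation and translation, as between consecutive robot poses.
theta = 0.03
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
dst = np.array([(x, 0.0) for x in range(10)] + [(0.0, y) for y in range(1, 10)])
src = dst @ R0.T + np.array([0.10, -0.05])
aligned = icp_2d(src, dst)
```

The recovered transform is exactly the relative robot displacement the paper uses for localization; real scans additionally need outlier rejection for points visible in only one scan.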
The development and comparison of robust methods for estimating the fundamental matrix
 International Journal of Computer Vision
, 1997
Abstract

Cited by 266 (10 self)
Abstract. This paper has two goals. The first is to develop a variety of robust methods for the computation of the Fundamental Matrix, the calibration-free representation of camera motion. The methods are drawn from the principal categories of robust estimators, viz. case deletion diagnostics, M-estimators and random sampling, and the paper develops the theory required to apply them to nonlinear orthogonal regression problems. Although a considerable amount of interest has focussed on the application of robust estimation in computer vision, the relative merits of the many individual methods are unknown, leaving the potential practitioner to guess at their value. The second goal is therefore to compare and judge the methods. Comparative tests are carried out using correspondences generated both synthetically in a statistically controlled fashion and from feature matching in real imagery. In contrast with previously reported methods the goodness of fit to the synthetic observations is judged not in terms of the fit to the observations per se but in terms of fit to the ground truth. A variety of error measures are examined. The experiments allow a statistically satisfying and quasi-optimal method to be synthesized, which is shown to be stable with up to 50 percent outlier contamination, and may still be used if there are more than 50 percent outliers. Performance bounds are established for the method, and a variety of robust methods to estimate the standard deviation of the error and covariance matrix of the parameters are examined. The results of the comparison have broad applicability to vision algorithms where the input data are corrupted not only by noise but also by gross outliers.
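Of the estimator categories compared, random sampling (RANSAC-style) is the easiest to sketch. To keep the geometry minimal, the example below fits a 2D line rather than the fundamental matrix; the sample/score/refit structure is the same, and the names and thresholds are illustrative:

```python
import numpy as np

def ransac_line(pts, n_iter=200, thresh=0.05, rng=None):
    """Random-sampling robust fit: repeatedly fit a line to a minimal
    2-point sample, count points within `thresh` of it, keep the model
    with most inliers, then re-fit by least squares on that inlier set."""
    rng = rng or np.random.default_rng(0)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        d = q - p
        n = np.array([-d[1], d[0]])          # line normal
        norm = np.linalg.norm(n)
        if norm == 0:
            continue
        dist = np.abs((pts - p) @ (n / norm))  # perpendicular distances
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    x, y = pts[best_inliers].T               # final fit on inliers only
    a, b = np.polyfit(x, y, 1)
    return a, b, best_inliers

# 40 points near y = 2x + 1 plus 15 gross outliers.
rng = np.random.default_rng(3)
xs = np.linspace(0, 1, 40)
inlier_pts = np.stack([xs, 2 * xs + 1 + 0.01 * rng.standard_normal(40)], axis=1)
outlier_pts = rng.uniform([0, 0], [1, 5], size=(15, 2))
pts = np.vstack([inlier_pts, outlier_pts])
a, b, mask = ransac_line(pts)
```

For the fundamental matrix the minimal sample is 7 or 8 correspondences and the residual is a point-to-epipolar-line distance, but the sampling logic is unchanged.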
Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods
 International Journal of Computer Vision
, 2005
Abstract

Cited by 229 (15 self)
Abstract. Differential methods belong to the most widely used techniques for optic flow computation in image sequences. They can be classified into local methods such as the Lucas–Kanade technique or Bigün’s structure tensor method, and into global methods such as the Horn/Schunck approach and its extensions. Often local methods are more robust under noise, while global techniques yield dense flow fields. The goal of this paper is to contribute to a better understanding and the design of novel differential methods in four ways: (i) We juxtapose the role of smoothing/regularisation processes that are required in local and global differential methods for optic flow computation. (ii) This discussion motivates us to describe and evaluate a novel method that combines important advantages of local and global approaches: It yields dense flow fields that are robust against noise. (iii) Spatiotemporal and nonlinear extensions as well as multiresolution frameworks are presented for this hybrid method. (iv) We propose a simple confidence measure for optic flow methods that minimise energy functionals. It allows one to sparsify a dense flow field gradually, depending on the reliability required for the resulting flow. Comparisons with experiments from the literature demonstrate the favourable performance of the proposed methods and the confidence measure.
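The local baseline the paper builds on is easy to state in code. A minimal Lucas–Kanade sketch (a dense per-pixel loop with no multiresolution and none of the paper's global regularisation; the window size and synthetic one-pixel shift are illustrative):

```python
import numpy as np

def lucas_kanade(I1, I2, win=5):
    """Classic local flow: in each small window, solve the least-squares
    normal equations of the brightness-constancy constraint
    Ix*u + Iy*v + It = 0 for a single (u, v)."""
    Iy, Ix = np.gradient(I1)          # np.gradient returns (d/drow, d/dcol)
    It = I2 - I1
    r = win // 2
    h, w = I1.shape
    flow = np.zeros((h, w, 2))
    for i in range(r, h - r):
        for j in range(r, w - r):
            sl = (slice(i - r, i + r + 1), slice(j - r, j + r + 1))
            A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
            b = -It[sl].ravel()
            AtA = A.T @ A
            if np.linalg.det(AtA) > 1e-6:   # skip ill-conditioned windows
                flow[i, j] = np.linalg.solve(AtA, A.T @ b)
    return flow

# A smooth pattern shifted one pixel to the right between frames.
Y, X = np.mgrid[0:32, 0:32].astype(float)
I1 = np.sin(0.3 * X) + np.cos(0.3 * Y)
I2 = np.roll(I1, 1, axis=1)
flow = lucas_kanade(I1, I2)
u = flow[8:-8, 8:-8, 0]
v = flow[8:-8, 8:-8, 1]
```

The `det(AtA)` guard is exactly where the aperture problem bites: in textureless windows the local system is singular and no flow is returned, which is the gap the paper's combined local–global energy fills with Horn/Schunck-style regularisation.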