Results 11–20 of 57
Robust Computer Vision through Kernel Density Estimation
In 7th European Conf. on Computer Vision, 2002
Abstract

Cited by 29 (2 self)
Two new techniques based on nonparametric estimation of probability densities are introduced which improve on the performance of equivalent robust methods currently employed in computer vision. The first technique draws from the projection pursuit paradigm in statistics, and carries out regression M-estimation with a weak dependence on the accuracy of the scale estimate. The second technique exploits the properties of the multivariate adaptive mean shift, and accomplishes the fusion of uncertain measurements arising from an unknown number of sources. As an example, the two techniques are extensively used in an algorithm for the recovery of multiple structures from heavily corrupted data.
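The mean-shift fusion idea the abstract describes can be illustrated with a minimal one-dimensional sketch (fixed bandwidth here for simplicity; the paper's version is multivariate and adaptive):

```python
# Illustrative sketch (not the authors' code): mean shift mode-seeking
# over 1-D measurements, fusing samples from an unknown number of sources.
import numpy as np

def mean_shift_modes(x, bandwidth, iters=100, tol=1e-6):
    """Move every point uphill on the kernel density estimate of x."""
    y = x.astype(float).copy()
    for _ in range(iters):
        # Gaussian-kernel weights between current positions and the data.
        w = np.exp(-0.5 * ((y[:, None] - x[None, :]) / bandwidth) ** 2)
        y_new = (w * x[None, :]).sum(axis=1) / w.sum(axis=1)
        if np.max(np.abs(y_new - y)) < tol:
            y = y_new
            break
        y = y_new
    # Merge converged points that landed on the same mode.
    modes = []
    for v in np.sort(y):
        if not modes or abs(v - modes[-1]) > bandwidth:
            modes.append(v)
    return np.array(modes)

# Measurements from two unknown sources -> two recovered modes.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(3.0, 0.1, 50)])
modes = mean_shift_modes(data, bandwidth=0.3)
```

The number of sources is never supplied: it falls out of the number of distinct density modes the procedure converges to.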
Dynamic Coupled Component Analysis
2001
Abstract

Cited by 25 (6 self)
We present a method for simultaneously learning linear models of multiple high dimensional data sets and the dependencies between them. For example, we learn asymmetrically coupled linear models for the faces of two different people and show how these models can be used to animate one face given a video sequence of the other. We pose the problem as a form of Asymmetric Coupled Component Analysis (ACCA) in which we simultaneously learn the subspaces for reducing the dimensionality of each dataset while coupling the parameters of the low dimensional representations.
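A hypothetical sketch of the coupling idea (not the paper's exact ACCA fitting procedure, which learns the subspaces and the coupling jointly): reduce each data set with PCA, then learn a linear map between the low-dimensional codes by least squares.

```python
# Sketch only: separate PCA per data set plus a least-squares code-to-code
# map, a simplified stand-in for jointly learned coupled subspaces.
import numpy as np

def fit_coupled(X, Y, k):
    """X, Y: (n_samples, dim) paired observations; k: subspace dimension."""
    mx, my = X.mean(0), Y.mean(0)
    Ux = np.linalg.svd(X - mx, full_matrices=False)[2][:k]  # X basis (k, dx)
    Uy = np.linalg.svd(Y - my, full_matrices=False)[2][:k]  # Y basis (k, dy)
    Cx = (X - mx) @ Ux.T                                    # X codes (n, k)
    Cy = (Y - my) @ Uy.T                                    # Y codes (n, k)
    A = np.linalg.lstsq(Cx, Cy, rcond=None)[0]              # code-to-code map
    return mx, my, Ux, Uy, A

def transfer(x, model):
    """Predict the coupled y (e.g., the second face) for a new x."""
    mx, my, Ux, Uy, A = model
    return ((x - mx) @ Ux.T) @ A @ Uy + my

# Toy data: Y is a fixed linear function of X plus small noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
W = rng.normal(size=(4, 6))
Y = X @ W + 0.01 * rng.normal(size=(200, 6))
model = fit_coupled(X, Y, k=4)
pred = transfer(X[0], model)
```

The asymmetry in the paper's formulation (one data set driving the other) corresponds here to the direction of the learned map `A`.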
Robust Regression for Data with Multiple Structures
In 2001 IEEE Conference on Computer Vision and Pattern Recognition, volume I, 2001
Abstract

Cited by 24 (3 self)
In many vision problems (e.g., stereo, motion) multiple structures can occur in the data, in which case several instances of the same model need to be recovered from a single data set. However, once the measurement noise becomes significantly large relative to the separation between the structures, the robust statistical methods commonly used in the vision community tend to fail. In this paper, we show that all these techniques are special cases of the general class of M-estimators with auxiliary scale, and explain their failure in the presence of noisy multiple structures. To be able to cope with data containing multiple structures the techniques innate to vision (Hough and RANSAC) should be combined with the robust methods customary in statistics. The implications of our analysis are illustrated by introducing a simple procedure for 2D multi-structured data problematic for all known current techniques.
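A minimal sketch of the class the paper analyzes, M-estimation with a fixed (auxiliary) scale, here for a 1-D location estimate via iteratively reweighted least squares with the Tukey biweight (the specific data and scale are illustrative):

```python
# Fixed-scale M-estimation sketch: with a scale matched to the inlier
# noise, the estimator locks onto one structure and ignores the other.
import numpy as np

def tukey_weights(r, scale, c=4.685):
    """Tukey biweight: smooth downweighting, zero beyond c*scale."""
    u = r / (c * scale)
    w = (1 - u ** 2) ** 2
    w[np.abs(u) >= 1] = 0.0          # residuals beyond c*scale get zero weight
    return w

def m_estimate_location(x, scale, iters=50):
    mu = np.median(x)                # robust starting point
    for _ in range(iters):
        w = tukey_weights(x - mu, scale)
        mu = (w * x).sum() / w.sum() # iteratively reweighted least squares
    return mu

# Two structures: inliers near 0 and a second structure near 10.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 0.2, 80), rng.normal(10, 0.2, 40)])
mu = m_estimate_location(x, scale=0.2)
```

The paper's point is visible in the `scale` argument: when the supplied scale grows toward the separation between the structures, the weights stop discriminating and the estimate drifts between them.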
The modified pbM-estimator method and a runtime analysis technique for the RANSAC family
In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2005
Abstract

Cited by 12 (2 self)
Robust regression techniques are used today in many computer vision algorithms. Chen and Meer recently presented a new robust regression technique named the projection-based M-estimator. Unlike other methods in the RANSAC family of techniques, where performance depends on a user-supplied scale parameter, in the pbM-estimator technique this scale parameter is estimated automatically from the data using kernel smoothing density estimation. In this work we improve the performance of the pbM-estimator by changing its cost function. Replacing the cost function of the pbM-estimator with the changed one yields the modified pbM-estimator. The cost function of the modified pbM-estimator is more stable relative to the scale parameter and is also a better classifier. Thus we get a more robust and effective technique. A new general method to estimate the runtime of robust regression algorithms is proposed. Using it we show that the modified pbM-estimator runs 2-3 times faster than the pbM-estimator. Experimental results of fundamental matrix estimation are presented, demonstrating the correctness of the proposed analysis method and the advantages of the modified pbM-estimator.
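The ingredient the pbM-estimator automates can be sketched on its own: a scale (bandwidth) chosen from the data via the MAD rule feeding a kernel-smoothed density of the residuals, whose mode locates the inlier structure. The bandwidth rule and toy data below are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch: data-driven bandwidth + kernel density estimate of
# residuals, replacing a user-supplied scale parameter.
import numpy as np

def mad_bandwidth(r):
    """Median-absolute-deviation scale shrunk at the n^(-1/5) kernel rate."""
    s = 1.4826 * np.median(np.abs(r - np.median(r)))
    return s * len(r) ** (-0.2)

def kde(r, grid, h):
    """Gaussian kernel density estimate of residuals r on the given grid."""
    return np.exp(-0.5 * ((grid[:, None] - r[None, :]) / h) ** 2).sum(1) / (
        len(r) * h * np.sqrt(2 * np.pi))

# Residuals of a hypothetical fit: a tight inlier peak plus gross outliers.
rng = np.random.default_rng(3)
residuals = np.concatenate([rng.normal(0, 0.5, 200),
                            rng.uniform(-20, 20, 100)])
h = mad_bandwidth(residuals)         # no user-supplied scale needed
grid = np.linspace(-20, 20, 801)
density = kde(residuals, grid, h)
peak = grid[np.argmax(density)]      # mode of the residual density
```

No scale parameter is supplied by the user; everything is derived from the residual sample, which is the property the abstract contrasts with the rest of the RANSAC family.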
Music: Approximate matching algorithms for music information retrieval using vocal input
 In Proceedings of the eleventh ACM international conference on Multimedia
Abstract

Cited by 10 (0 self)
Effective use of multimedia collections requires efficient and intuitive methods of searching and browsing. This work considers databases which store music and explores how these may best be searched by providing input queries in some musical form. For the average person, humming several notes of the desired melody is the most straightforward method for providing this input, but such input is very likely to contain several errors. Previously proposed implementations of so-called query-by-humming systems are effective only when the number of input errors is small. We conducted experiments which revealed that the expected error rate for user queries is much higher than existing algorithms can tolerate. We then developed algorithms based on approximate matching techniques which deliver much improved results when comparing error-filled vocal user queries against a music collection.
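The core of approximate matching can be sketched with the classic edit-distance dynamic program. The pitch-interval encoding and the toy song database are illustrative assumptions; the paper's own matcher is more elaborate.

```python
# Sketch: rank stored melodies by edit distance to an error-filled
# hummed query, encoded as pitch intervals in semitones.
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def best_match(query, database):
    """Return the melody name whose intervals are closest to the query."""
    return min(database, key=lambda name: edit_distance(query, database[name]))

# Hypothetical two-song database; the hummed query garbles one interval.
songs = {"tune_a": (2, 2, -4, 5), "tune_b": (-1, 3, 3, -2)}
hummed = (2, 1, -4, 5)               # one substitution error vs tune_a
match = best_match(hummed, songs)
```

Because insertions and deletions are scored explicitly, the match survives dropped or added notes, not just wrong ones, which is exactly the tolerance exact-matching systems lack.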
Sovereigns, Upstream Capital Flows, and Global Imbalances
2011
Abstract

Cited by 8 (0 self)
The paper presents new stylized facts on the direction of capital flows. We find (i) international capital flows net of government debt and/or official aid are positively correlated with growth; (ii) sovereign debt flows are negatively correlated with growth only if debt is financed by another sovereign; (iii) public savings are robustly positively correlated with growth as opposed to private savings. Sovereign to sovereign transactions can fully account for upstream capital flows and global imbalances. These empirical facts contradict the conventional wisdom and constitute a challenge for existing theories. JEL Classification: F21, F41, O1
Managerial objectives, the R-rating puzzle, and the production of violent films
Journal of Business, 2004
Abstract

Cited by 7 (1 self)
The purpose of this article is threefold. First, we provide project-based evidence consistent with risk-averse and revenue-maximizing behavior on the part of executives in charge of large projects. Second, we partially ...
Heteroscedastic Hough Transform (HtHT): An efficient method for robust line fitting in the 'Errors in the Variables' problem
2000
Abstract

Cited by 6 (1 self)
In this paper we present an efficient method for robust line fitting in the heteroscedastic 'errors in the variables' problem, with correlated noise. It is assumed that the covariance matrix associated with each data point is known. The method suggested is easy to implement, fast to compute, and provides a systematic solution to this important practical problem. The organization of the paper is as follows. In section 2 the problem is defined and formulated as a global optimization problem and the general approach to solving it is outlined. In section 3 it is shown that the objective function can be simplified and has a special structure. It is further shown that this special structure leads to an elegant, efficient computational solution. Experimental results are presented in section 4. In section 5 an alternative definition of the problem is considered and limitations of the method are discussed.
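The global-optimization formulation can be sketched directly: for a line with unit normal n(theta), minimize the covariance-normalized squared distances over theta, with the offset solved in closed form for each angle. The brute-force angle grid below stands in for the paper's efficient solution, and the data are illustrative.

```python
# Sketch of line fitting with known per-point noise covariances:
# minimize sum_k (n.x_k + c)^2 / (n^T C_k n) over the line parameters.
import numpy as np

def hetero_line_fit(pts, covs, n_angles=720):
    """pts: (n, 2) points; covs: (n, 2, 2) per-point noise covariances."""
    best = (np.inf, None, None)
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        n = np.array([np.cos(theta), np.sin(theta)])
        var = np.einsum('i,kij,j->k', n, covs, n)  # n^T C_k n for each point
        w = 1.0 / var
        proj = pts @ n
        c = -(w * proj).sum() / w.sum()            # closed-form offset
        cost = (w * (proj + c) ** 2).sum()
        if cost < best[0]:
            best = (cost, theta, c)
    return best[1], best[2]                        # line: x*cos + y*sin + c = 0

# Points on the line y = 0 with anisotropic (heteroscedastic) noise.
rng = np.random.default_rng(4)
x = np.linspace(0, 10, 60)
covs = np.tile(np.diag([0.05, 0.01]), (60, 1, 1))
noise = rng.multivariate_normal([0, 0], covs[0], 60)
pts = np.column_stack([x, np.zeros(60)]) + noise
theta, c = hetero_line_fit(pts, covs)
```

Normalizing each residual by its own directional variance n^T C_k n is what makes the fit consistent when every point carries a different, correlated error, the situation ordinary least squares mishandles.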
Social and Economic Determinants of Turkish Voter Choice in the 1995 Parliamentary Election
2007
Abstract

Cited by 5 (4 self)
Any opinions expressed here are those of the author(s) and not those of the institute. Research disseminated by IZA may include views on policy, but the institute itself takes no institutional policy positions. The Institute for the Study of Labor (IZA) in Bonn is a local and virtual international research center and a place of communication between science, politics and business. IZA is an independent nonprofit company supported by Deutsche Post World Net. The center is associated with the University of Bonn and offers a stimulating research environment through its research networks, research support, and visitors and doctoral programs. IZA engages in (i) original and internationally competitive research in all fields of labor economics, (ii) development of policy concepts, and (iii) dissemination of research results and concepts to the interested public. IZA Discussion Papers often represent preliminary work and are circulated to encourage discussion. Citation of such a paper should account for its provisional character. A revised version may be available directly from the author. IZA Discussion Paper No. 2881
Guaranteed Convergence of the Hough Transform
In Vision Geometry III, Proc. SPIE 2356, 1994
Abstract

Cited by 5 (1 self)
The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into the problem of finding the global maximum of a two-dimensional function over a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function over a bounded domain cannot be found by a finite number of function evaluations; only if sufficient a priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a priori information and its efficient use are the main challenges in real global optimization problems. Convergence in the Hough Transform means the ability to ensure that the global maximum is in the immediate neighborhood of the maximal grid point. More than thirty years after Hough patented the basic algorithm, it is still not clear how fine the parameter space quantization should be in order not to miss the true maximum. In this paper conditions for the convergence of the Hough Transform to the global maximum are derived. The necessary constraints on the variability of the objective (Hough) function are obtained by using the saturated parabolic voting kernel and by defining an image model with several application-dependent parameters. Random errors in the location of edge points and background noise are allowed in the model and lead to statistical convergence guarantees. Significant...
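The transform analyzed here can be sketched directly: a normal-parameterization Hough Transform whose votes come from a continuous parabolic kernel, maximized by exhaustive search over a (theta, rho) grid. The grid resolution and kernel width below are illustrative choices, which is precisely the quantization question the paper addresses.

```python
# Sketch: Hough Transform with normal parameters (theta, rho) and a
# saturated parabolic voting kernel, maximized over a finite grid.
import numpy as np

def hough_line(pts, n_theta=180, n_rho=200, rho_max=15.0, width=0.5):
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_theta, n_rho))
    for i, th in enumerate(thetas):
        # signed distance of every point along the normal direction
        d = pts[:, 0] * np.cos(th) + pts[:, 1] * np.sin(th)
        r = np.abs(d[:, None] - rhos[None, :])
        # parabolic kernel: votes decay quadratically and saturate at zero
        votes = np.clip(1.0 - (r / width) ** 2, 0.0, None)
        acc[i] = votes.sum(axis=0)
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[i], rhos[j]        # line: x*cos(theta) + y*sin(theta) = rho

# Collinear points on the vertical line x = 5 (theta=0, rho=5) plus clutter.
rng = np.random.default_rng(5)
line_pts = np.column_stack([np.full(40, 5.0), np.linspace(-8, 8, 40)])
line_pts += 0.05 * rng.normal(size=line_pts.shape)
clutter = rng.uniform(-10, 10, size=(60, 2))
theta, rho = hough_line(np.vstack([line_pts, clutter]))
```

The convergence question in the abstract is visible in the two grid sizes: whether the maximal grid point is guaranteed to sit next to the true maximum depends on how the kernel width relates to the theta and rho step sizes.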