Results 1–10 of 12
Estimating the Support of a High-Dimensional Distribution
, 1999
Abstract

Cited by 766 (29 self)
Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a preliminary theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled d...
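The method described in this abstract is the one-class SVM, now available in standard libraries. A minimal sketch using scikit-learn's `OneClassSVM` (assuming scikit-learn is installed; the data here is synthetic, and `nu` plays the role of the a priori specified fraction ν):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))                      # samples from P
X_test = np.vstack([rng.normal(size=(5, 2)),             # likely inside S
                    rng.normal(loc=6.0, size=(5, 2))])   # far outside S

# nu upper-bounds the fraction of training points falling outside S
clf = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X_train)

# the learned f is positive on the estimated region S and negative outside;
# predict() returns +1 / -1 accordingly
print(clf.predict(X_test))
```

The quadratic program is solved internally; the kernel expansion typically involves only a small subset of the training points (the support vectors).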
Support Vector Method for Novelty Detection
, 2000
Abstract

Cited by 160 (4 self)
Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. We provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data.
Content-based retrieval of VRML objects – an iterative and interactive approach
, 2001
Abstract

Cited by 114 (6 self)
We examine the problem of searching a database of three-dimensional objects (given in VRML) for objects similar to a given object. We introduce an algorithm which is both iterative and interactive. Rather than base the search solely on geometric feature similarity, we propose letting the user influence future search results by marking some of the results of the current search as 'relevant' or 'irrelevant', thus indicating personal preferences. A novel approach, based on SVM, is used for the adaptation of the distance measure consistently with these markings, which brings the 'relevant' objects closer and pushes the 'irrelevant' objects farther. We show that in practice very few iterations are needed for the system to converge well on what the user "had in mind".
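The feedback loop can be illustrated under a strong simplification: train a linear SVM on the marked objects and rerank the database by decision value. The feature vectors and feedback below are synthetic, and the paper's actual distance-adaptation scheme is more involved than this sketch:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
db = rng.normal(size=(100, 8))        # geometric feature vectors (hypothetical)

# user feedback on the current result page: a few objects marked
# 'relevant' (+1) and a few marked 'irrelevant' (-1)
marked = np.vstack([db[:5] + 0.1, db[50:55] + 3.0])
labels = np.array([1] * 5 + [-1] * 5)

svm = SVC(kernel="linear").fit(marked, labels)

# rerank the whole database: a larger decision value means closer to the
# user's marked preferences, so 'relevant'-like objects move to the front
order = np.argsort(-svm.decision_function(db))
print(order[:10])
```

Iterating this (retrain on accumulated marks, rerank, present new results) mirrors the interactive loop the abstract describes.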
Kernel Methods and Support Vector Machines
, 2003
Abstract

Cited by 92 (4 self)
Introduction Over the past ten years kernel methods such as Support Vector Machines and Gaussian Processes have become a staple for modern statistical estimation and machine learning. The groundwork for this field was laid in the second half of the 20th century by Vapnik and Chervonenkis (geometrical formulation of an optimal separating hyperplane, capacity measures for margin classifiers), Mangasarian (linear separation by a convex function class), Aronszajn (Reproducing Kernel Hilbert Spaces), Aizerman, Braverman, and Rozonoer (nonlinearity via kernel feature spaces), Arsenin and Tikhonov (regularization and ill-posed problems), and Wahba (regularization in Reproducing Kernel Hilbert Spaces). However, it took until the early 90s for positive definite kernels to become a popular and viable means of estimation. Firstly, this was due to the lack of sufficiently powerful hardware, since kernel methods require the computation of the so-called kernel matrix, which requires quadratic storage i
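The quadratic-storage point is easy to see concretely. A minimal NumPy sketch of an RBF kernel (Gram) matrix, with arbitrary data and bandwidth: n samples require an n-by-n matrix, so storage grows as O(n²):

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

X = np.random.default_rng(0).normal(size=(500, 3))
K = rbf_kernel_matrix(X)
print(K.shape)   # 500 samples already need a 500-by-500 matrix
```

The clamp via `np.maximum(d2, 0.0)` guards against tiny negative squared distances from floating-point cancellation.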
Nonparametric quantile estimation
, 2006
Abstract

Cited by 52 (9 self)
In regression, the desired estimate of y|x is not always given by a conditional mean, although this is most common. Sometimes one wants to obtain a good estimate that satisfies the property that a proportion, τ, of y|x will be below the estimate. For τ = 0.5 this is an estimate of the median. What might be called median regression is subsumed under the term quantile regression. We present a nonparametric version of a quantile estimator, which can be obtained by solving a simple quadratic programming problem, and provide uniform convergence statements and bounds on the quantile property of our estimator. Experimental results show the feasibility of the approach and the competitiveness of our method with existing ones. We discuss several types of extensions, including an approach to solve the quantile crossing problem, as well as a method to incorporate prior qualitative knowledge such as monotonicity constraints.
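The quantile property rests on the pinball (quantile) loss. A small NumPy check on synthetic data that minimizing the pinball loss over a constant estimate recovers the empirical τ-quantile:

```python
import numpy as np

def pinball_loss(y, f, tau):
    """tau * (y - f) where y > f, and (1 - tau) * (f - y) elsewhere."""
    r = y - f
    return np.mean(np.where(r > 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(0)
y = rng.normal(size=1000)
tau = 0.9

# minimize over constant estimates: the minimizer is the empirical tau-quantile
grid = np.linspace(y.min(), y.max(), 2001)
best = grid[np.argmin([pinball_loss(y, c, tau) for c in grid])]
print(best, np.quantile(y, tau))   # the two values agree closely
```

The nonparametric estimator in the paper replaces the constant with a kernel expansion, but the same loss is what makes a proportion τ of the observations fall below the estimate.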
Bayesian support vector regression using a unified loss function
 IEEE Transactions on Neural Networks
, 2004
Abstract

Cited by 28 (2 self)
In this paper, we use a unified loss function, called the soft insensitive loss function, for Bayesian support vector regression. We follow standard Gaussian processes for regression to set up the Bayesian framework, in which the unified loss function is used in the likelihood evaluation. Under this framework, the maximum a posteriori estimate of the function values corresponds to the solution of an extended support vector regression problem. The overall approach has the merits of support vector regression such as convex quadratic programming and sparsity in solution representation. It also has the advantages of Bayesian methods for model adaptation and error bars of its predictions. Experimental results on simulated and real-world data sets indicate that the approach works well even on large data sets.
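For orientation, the soft insensitive loss function is, roughly, a smoothed version of the classical ε-insensitive loss of standard SVR; its exact form is given in the paper. A minimal NumPy sketch of the classical loss it generalizes (the `eps` value is arbitrary):

```python
import numpy as np

def eps_insensitive(r, eps=0.1):
    """Classical SVR loss: zero inside the eps-tube, linear outside."""
    return np.maximum(np.abs(r) - eps, 0.0)

residuals = np.array([-0.5, -0.05, 0.0, 0.05, 0.5])
print(eps_insensitive(residuals))   # zero for residuals inside the tube
```

The flat region inside the tube is what yields sparsity (points with zero loss do not become support vectors); the smoothing in the unified loss makes it differentiable, which the Bayesian likelihood treatment requires.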
Accessible Image Search
, 2009
Abstract

Cited by 8 (5 self)
About 8% of men and 0.8% of women suffer from color blindness. We show that existing image search techniques cannot provide satisfactory results for these users, since many images will not be well perceived by them due to the loss of color information. In this paper, we introduce a scheme named Accessible Image Search (AIS) to accommodate these users. Different from the general image search scheme that aims at returning more relevant results, AIS further takes into account the colorblind accessibilities of the returned results, i.e., the image qualities in the eyes of colorblind users. The scheme includes two components: accessibility assessment and accessibility improvement. For accessibility assessment, we introduce an analysis-based method and a learning-based method. Based on the measured accessibility scores, different reranking methods can be performed to prioritize the images with high accessibility. In the accessibility improvement component, we propose an efficient recoloring algorithm to modify the colors of the images such that they can be better perceived by colorblind users. We also propose the Accessibility Average Precision (AAP) for AIS as a complementary performance evaluation measure to the conventional relevance-based evaluation methods. Experimental results with more than 60,000 images and 20 anonymous colorblind users demonstrate the effectiveness and usefulness of the proposed scheme.
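The reranking step can be illustrated with a toy linear mix of relevance and accessibility scores. Everything here is hypothetical (the scores, the linear combination, and `alpha`); the paper's actual reranking methods differ:

```python
import numpy as np

def rerank(relevance, accessibility, alpha=0.5):
    """Order images by a (hypothetical) mix of relevance and accessibility."""
    combined = (1 - alpha) * relevance + alpha * accessibility
    return np.argsort(-combined)

relevance = np.array([0.9, 0.8, 0.7, 0.6])       # hypothetical search scores
accessibility = np.array([0.1, 0.9, 0.95, 0.2])  # hypothetical accessibility scores
print(rerank(relevance, accessibility))
```

With `alpha=0.5`, the most relevant image (index 0) drops behind indices 1 and 2, whose colors a colorblind user would perceive better.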
A review of RKHS methods in machine learning
, 2006
Abstract

Cited by 2 (1 self)
Over the last ten years, estimation and learning methods utilizing positive definite kernels have become rather popular, particularly in machine learning. Since these methods have a stronger mathematical slant than earlier machine learning methods (e.g., neural networks), there is also
MSc Computational Finance Acknowledgements
, 2008
"... I would like to express my gratitude to my supervisor Dr. Ronq Qu for her invaluable assistance, support and guidance throughout this work. I am also thankful to all of my professors for all the knowledge and inspiration I have gained during the last year. I feel much indebted to my extended family ..."
Abstract
I would like to express my gratitude to my supervisor Dr. Ronq Qu for her invaluable assistance, support and guidance throughout this work. I am also thankful to all of my professors for all the knowledge and inspiration I have gained during the last year. I feel much indebted to my extended family and especially to my parents for their love, care and support throughout my life. I also wish to thank all my friends for their encouragement and the fruitful discussions during my studies. Finally, I have no words to describe my appreciation to my late grandmother, Mrs. Astero Kountouri, for everything she has done for me. To her I dedicate this work.

Interest rate forecasting is one of the most challenging tasks in modern finance and economics. Several studies, examining different factors and statistical models, have been employed; however, they often failed to beat a simple random walk. This study suggests the use of nonlinear, nonparametric models, namely Support Vector Machines (SVM) and Artificial Neural Networks (ANN). The methodology employed uses interest rate levels