Results 1 – 10 of 66,485
Efficient SVM training using low-rank kernel representations
Journal of Machine Learning Research, 2001
"... SVM training is a convex optimization problem which scales with the training set size rather than the feature space dimension. While this is usually considered to be a desired quality, in large scale problems it may cause training to be impractical. The common techniques to handle this difficulty ba ..."
Cited by 244 (3 self)
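The low-rank idea behind this entry can be illustrated with a Nyström-style factorization: approximate the m × m kernel matrix by a rank-k factor built from k landmark points, so downstream solvers touch an m × k matrix instead. This is a generic sketch of low-rank kernel representations, not necessarily the specific factorization used in the paper; the function names and parameters are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def nystrom_factor(X, k, gamma=1.0, seed=0):
    """Rank-k Nystrom factor G with K ~= G @ G.T for the RBF kernel on X.

    Illustrative low-rank kernel representation (not the paper's exact
    construction): sample k landmarks, form the cross block C and the
    landmark block W, and return C @ W^{-1/2}.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=k, replace=False)
    C = rbf_kernel(X, X[idx], gamma)        # m x k cross-kernel block
    W = rbf_kernel(X[idx], X[idx], gamma)   # k x k landmark block
    # symmetric inverse square root of W via eigendecomposition
    s, U = np.linalg.eigh(W)
    W_inv_sqrt = U @ np.diag(1.0 / np.sqrt(np.maximum(s, 1e-12))) @ U.T
    return C @ W_inv_sqrt                   # K ~= (C W^{-1/2})(C W^{-1/2})^T
```

With k = m the approximation is exact (up to floating point); with k ≪ m it trades accuracy for an m × k working set, which is what makes large-scale training tractable.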
An Accelerated MDM Algorithm for SVM Training
"... Abstract. In this work we will propose an acceleration procedure for the Mitchell–Demyanov–Malozemov (MDM) algorithm (a fast geometric algorithm for SVM construction) that may yield quite large training savings. While decomposition algorithms such as SVMLight or SMO are usually the SVM methods of ch ..."
Multicore Structural SVM Training
"... Abstract. Many problems in natural language processing and computer vision can be framed as structured prediction problems. Structural support vector machines (SVM) is a popular approach for training structured predictors, where learning is framed as an optimization problem. Most structural SVM sol ..."
Cited by 2 (1 self)
A Fast Revised Simplex Method for SVM Training
"... Active set methods for training the Support Vector Machines (SVM) are advantageous since they enable incremental training and, as we show in this research, do not exhibit exponentially increasing training times commonly associated with the decomposition methods as the SVM training parameter, C, is i ..."
Cited by 1 (1 self)
Iterative Inner Solvers for Revised Simplex SVM Training
"... Support Vector Machine (SVM) training is equivalent to solving a large constrained optimization problem. Much work has been spent on decompositional optimization methods for this problem, but nondecompositional approaches have only recently regained attention. Notably, Sentelle’s work in applying R ..."
Rosen’s Projection Method for SVM Training
"... Abstract. In this work we will give explicit formulae for the application of Rosen’s gradient projection method to SVM training that leads to a very simple implementation. We shall experimentally show that the method provides good descent directions that result in less training iterations, particula ..."
Making Large-Scale SVM Learning Practical
1998
"... Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Despite the fact that this type of problem is well understood, there are many issues to be considered in designing an SVM learner. In particular, for large lea ..."
Cited by 1846 (17 self)
Core vector machines: Fast SVM training on very large data sets
Journal of Machine Learning Research, 2005
"... Standard SVM training has O(m^3) time and O(m^2) space complexities, where m is the training set size. It is thus computationally infeasible on very large data sets. By observing that practical SVM implementations only approximate the optimal solution by an iterative strategy, we scale up kernel met ..."
Cited by 133 (15 self)
Efficient Revised Simplex Method for SVM Training
"... Abstract — Existing active set methods reported in the literature for support vector machine (SVM) training must contend with singularities when solving for the search direction. When a singularity is encountered, an infinite descent direction can be carefully chosen that avoids cycling and allows t ..."
Pegasos: Primal Estimated sub-GrAdient SOlver for SVM
"... We describe and analyze a simple and effective stochastic sub-gradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ɛ is Õ(1/ɛ), where each iteration operates on a single training example. In contrast, previous analyses of stochastic gradient descent methods for SVMs require Ω(1/ɛ²) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of SVM. For a linear kernel, the total ..."
Cited by 531 (21 self)
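The Pegasos update described in this abstract is small enough to sketch: at step t, draw one example, use step size 1/(λt), and apply the hinge-loss sub-gradient when the margin is violated. This is a minimal sketch of the published update rule (without the optional projection step); the function name and parameters are illustrative, not from the authors' code.

```python
import numpy as np

def pegasos(X, y, lam=0.01, n_iters=1000, seed=0):
    """Sketch of the Pegasos update: stochastic sub-gradient descent on the
    primal SVM objective  (lam/2)||w||^2 + (1/m) sum_i max(0, 1 - y_i w.x_i).
    Illustrative names/parameters; omits the optional ball-projection step."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    w = np.zeros(X.shape[1])
    for t in range(1, n_iters + 1):
        i = rng.integers(m)           # one training example per iteration
        eta = 1.0 / (lam * t)         # step size 1/(lambda * t)
        if y[i] * (X[i] @ w) < 1:     # margin violated: sub-gradient has -y_i x_i
            w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
        else:                         # only the regularizer contributes
            w = (1.0 - eta * lam) * w
    return w
```

Each iteration costs O(d) for d-dimensional data regardless of m, which is what yields the Õ(1/(λɛ)) total runtime claimed for the linear kernel.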