Results

1 - 4 of 4

### Stochastic Optimization for Kernel PCA


Abstract

Kernel Principal Component Analysis (PCA) is a popular extension of PCA which is able to find nonlinear patterns from data. However, the application of kernel PCA to large-scale problems remains a big challenge, due to its quadratic space complexity and cubic time complexity in the number of examples. To address this limitation, we utilize techniques from stochastic optimization to solve kernel PCA with linear space and time complexities per iteration. Specifically, we formulate it as a stochastic composite optimization problem, where a nuclear norm regularizer is introduced to promote low-rankness, and then develop a simple algorithm based on stochastic proximal gradient descent. During the optimization process, the proposed algorithm always maintains a low-rank factorization of iterates that can be conveniently held in memory. Compared to previous iterative approaches, a remarkable property of our algorithm is that it is equipped with an explicit rate of convergence. Theoretical analysis shows that the solution of our algorithm converges to the optimal one at an O(1/T) rate, where T is the number of iterations.
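The core primitive this abstract names — a stochastic gradient step followed by the proximal operator of the nuclear norm (singular-value soft-thresholding) — can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' algorithm: the names `prox_nuclear` and `stochastic_prox_grad` are hypothetical, and the full SVD computed here is exactly the cost the paper's low-rank factorization of the iterates is designed to avoid.

```python
import numpy as np

def prox_nuclear(W, tau):
    """Proximal operator of tau * ||W||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def stochastic_prox_grad(stoch_grad, W0, step, tau, T):
    """Stochastic proximal gradient descent for min_W E[f(W)] + tau * ||W||_*.

    stoch_grad(W) returns a (possibly noisy) gradient estimate of f at W.
    """
    W = W0
    for _ in range(T):
        W = prox_nuclear(W - step * stoch_grad(W), step * tau)
    return W
```

For instance, with the exact gradient of f(W) = ½‖W − A‖²_F and step 1, a single iteration lands directly on the soft-thresholded matrix prox_nuclear(A, τ), whose small singular values are set to zero — the low-rank-promoting effect the regularizer is introduced for.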

### A Stochastic Forward-Backward Splitting Method for Solving Monotone Inclusions in Hilbert Spaces


### RAPID: Rapidly Accelerated Proximal Gradient Algorithms for Convex Minimization


Abstract

In this paper, we propose a new algorithm to speed up the convergence of accelerated proximal gradient (APG) methods. In order to minimize a convex function f(x), our algorithm introduces a simple line search step after each proximal gradient step in APG so that a biconvex function f(θx) is minimized over the scalar variable θ > 0 while fixing the variable x. We propose two new ways of constructing the auxiliary variables in APG based on the intermediate solutions of the proximal gradient and the line search steps. We prove that at an arbitrary iteration step t (t ≥ 1), our algorithm achieves a smaller upper bound on the gap between the current and optimal objective values than traditional APG methods such as FISTA [4], making it converge faster in practice. In fact, our algorithm can potentially be applied to many important convex optimization problems, such as sparse linear regression and kernel SVMs. Our experimental results clearly demonstrate that our algorithm converges faster than APG in all of the applications above, and is even comparable to some sophisticated solvers.
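The extra step the abstract describes — rescaling the fresh iterate x to θx with θ chosen to minimize f(θx) — can be grafted onto a standard FISTA loop as below. This is a hedged sketch, not the paper's scheme: the coarse grid search over θ and the name `fista_with_scaling` are our own simplifications, and the paper additionally proposes new constructions of the auxiliary (momentum) variable that are not reproduced here.

```python
import numpy as np

def fista_with_scaling(grad_f, f, prox, x0, L, T):
    """FISTA for min_x f(x) + g(x) (g handled via prox), with an added
    scalar line search after each proximal gradient step: replace the
    iterate x by theta * x, where theta > 0 approximately minimizes
    f(theta * x) over a coarse grid (a stand-in for an exact 1-D solve)."""
    x = y = x0
    t = 1.0
    thetas = np.linspace(0.5, 1.5, 21)  # candidate scalars; includes theta = 1
    for _ in range(T):
        # usual proximal gradient step with step size 1/L
        x_new = prox(y - grad_f(y) / L, 1.0 / L)
        # scalar line search: pick the best rescaling of the new iterate
        theta = min(thetas, key=lambda th: f(th * x_new))
        x_new = theta * x_new
        # standard FISTA momentum update
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

For a quadratic f the one-dimensional problem min_θ f(θx) has a closed-form solution, so the grid search above would be replaced by a direct formula in any serious implementation; the grid merely keeps the sketch self-contained.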