Results 1 - 7 of 7
Normalized online learning
"... We introduce online learning algorithms which are independent of feature scales, proving regret bounds dependent on the ratio of scales existent in the data rather than the absolute scale. This has several useful effects: there is no need to prenormalize data, the testtime and testspace complexity ..."
Abstract

Cited by 5 (3 self)
 Add to MetaCart
Abstract: We introduce online learning algorithms which are independent of feature scales, proving regret bounds dependent on the ratio of scales present in the data rather than the absolute scale. This has several useful effects: there is no need to pre-normalize data, the test-time and test-space complexity are reduced, and the algorithms are more robust.
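As a rough illustration of the scale-independence idea (a minimal sketch, not the paper's algorithm), one can track the largest magnitude seen per feature and normalize each coordinate before updating, so that rescaling any feature by a positive constant leaves the sequence of predictions unchanged:

```python
# Sketch of scale-invariant online learning with a hinge loss:
# per-feature running max scales replace any pre-normalization pass.
def normalized_sgd(examples, lr=0.5):
    w, s = {}, {}  # weights and per-feature max absolute values seen
    preds = []
    for x, y in examples:  # x: dict feature -> value, y: +/-1 label
        for f, v in x.items():
            s[f] = max(s.get(f, 0.0), abs(v))
        # predict on the normalized features v / s[f]
        p = sum(w.get(f, 0.0) * (v / s[f]) for f, v in x.items() if s[f] > 0)
        preds.append(p)
        if y * p < 1:  # hinge-loss subgradient step, also on normalized features
            for f, v in x.items():
                if s[f] > 0:
                    w[f] = w.get(f, 0.0) + lr * y * (v / s[f])
    return preds
```

Multiplying every value of a feature by a constant scales the running max by the same constant, so the normalized inputs, and hence the predictions, are identical.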
Dimension-free exponentiated gradient
In NIPS, 2013
"... I present a new online learning algorithm that extends the exponentiated gradient framework to infinite dimensional spaces. My analysis shows that the algorithm is implicitly able to estimate the L2 norm of the unknown competitor, U, achieving a regret bound of the order of O(U log(U T + 1))√T), ins ..."
Abstract

Cited by 4 (3 self)
 Add to MetaCart
(Show Context)
Abstract: I present a new online learning algorithm that extends the exponentiated gradient framework to infinite dimensional spaces. My analysis shows that the algorithm is implicitly able to estimate the L2 norm of the unknown competitor, U, achieving a regret bound of the order of O(U log(UT + 1)√T), instead of the standard O((U² + 1)√T), achievable without knowing U. For this analysis, I introduce novel tools for algorithms with time-varying regularizers, through the use of local smoothness. Through a lower bound, I also show that the algorithm is optimal up to a log(UT) term for linear and Lipschitz losses.
Unconstrained Online Linear Learning in Hilbert Spaces: Minimax Algorithms and Normal Approximations
"... We study algorithms for online linear optimization in Hilbert spaces, focusing on the case where the player is unconstrained. We develop a novel characterization of a large class of minimax algorithms, recovering, and even improving, several previous results as immediate corollaries. Moreover, using ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
Abstract: We study algorithms for online linear optimization in Hilbert spaces, focusing on the case where the player is unconstrained. We develop a novel characterization of a large class of minimax algorithms, recovering, and even improving, several previous results as immediate corollaries. Moreover, using our tools, we develop an algorithm that provides a regret bound of O(U ...
Scale-Free Algorithms for Online Linear Optimization
"... Abstract. We design algorithms for online linear optimization that have optimal regret and at the same time do not need to know any upper or lower bounds on the norm of the loss vectors. We achieve adaptiveness to norms of loss vectors by scale invariance, i.e., our algorithms make exactly the same ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Abstract: We design algorithms for online linear optimization that have optimal regret and at the same time do not need to know any upper or lower bounds on the norm of the loss vectors. We achieve adaptiveness to norms of loss vectors by scale invariance, i.e., our algorithms make exactly the same decisions if the sequence of loss vectors is multiplied by any positive constant. Our algorithms work for any decision set, bounded or unbounded. For unbounded decision sets, these are the first truly adaptive algorithms for online linear optimization.
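The scale-invariance property can be illustrated with a toy decision rule in the same spirit (a sketch under my own simplification to one dimension, not the paper's algorithm): play the negative sum of past loss vectors divided by the root of the accumulated squared norms. Multiplying every loss by a constant c > 0 scales numerator and denominator alike, so the decisions are unchanged:

```python
import math

# Toy scale-invariant decision rule for 1-d online linear optimization:
# w_t = -(sum of past losses) / sqrt(sum of past squared losses).
def scale_free_decisions(losses):
    g_sum, sq_sum, plays = 0.0, 0.0, []
    for g in losses:
        if sq_sum > 0:
            plays.append(-g_sum / math.sqrt(sq_sum))
        else:
            plays.append(0.0)  # nothing observed yet
        g_sum += g
        sq_sum += g * g
    return plays
```

Feeding in the loss sequence scaled by any positive constant reproduces the exact same sequence of plays.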
Adaptivity and Optimism: An Improved Exponentiated Gradient Algorithm
"... We present an adaptive variant of the exponentiated gradient algorithm. Leveraging the optimistic learning framework of Rakhlin & Sridharan (2012), we obtain regret bounds that in the learning from experts setting depend on the variance and path length of the best expert, improving on resul ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
(Show Context)
Abstract: We present an adaptive variant of the exponentiated gradient algorithm. Leveraging the optimistic learning framework of Rakhlin & Sridharan (2012), we obtain regret bounds that in the learning-from-experts setting depend on the variance and path length of the best expert, improving on results by Hazan & Kale (2008) and Chiang et al. (2012), and resolving an open problem posed by Kale (2012). Our techniques naturally extend to matrix-valued loss functions, where we present an adaptive matrix exponentiated gradient algorithm. To obtain the optimal regret bound in the matrix case, we generalize the Follow-the-Regularized-Leader algorithm to vector-valued payoffs, which may be of independent interest.
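For orientation, the baseline this adaptive variant improves on is the classical exponentiated gradient (Hedge) update over experts: multiply each weight by exp(-eta * loss) and renormalize. A minimal non-adaptive sketch (fixed learning rate, my own simplification):

```python
import math

# Classical exponentiated gradient / Hedge update over n experts:
# multiplicative reweighting by the exponentiated negative loss.
def hedge(loss_rounds, eta=0.5):
    n = len(loss_rounds[0])
    w = [1.0 / n] * n          # start from the uniform distribution
    for losses in loss_rounds:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
        z = sum(w)
        w = [wi / z for wi in w]  # renormalize to a probability vector
    return w
```

The adaptive variant in the paper tunes this update using optimistic predictions of the next loss; the sketch above only shows the base multiplicative-weights mechanism.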
Efficient Second Order Online Learning by Sketching
"... Abstract We propose Sketched Online Newton (SON), an online second order learning algorithm that enjoys substantially improved regret guarantees for illconditioned data. SON is an enhanced version of the Online Newton Step, which, via sketching techniques enjoys a running time linear in the dimens ..."
Abstract
 Add to MetaCart
Abstract: We propose Sketched Online Newton (SON), an online second-order learning algorithm that enjoys substantially improved regret guarantees for ill-conditioned data. SON is an enhanced version of the Online Newton Step, which, via sketching techniques, enjoys a running time linear in the dimension and sketch size. We further develop sparse forms of the sketching methods (such as Oja's rule), making the computation linear in the sparsity of features. Together, these eliminate all computational obstacles in previous second-order online learning approaches.
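The dense Online Newton Step that SON accelerates can be sketched as follows (a minimal illustration without any sketching, so it pays the full quadratic cost that SON avoids): maintain the matrix of accumulated gradient outer products and step along its inverse applied to the current gradient:

```python
import numpy as np

# Dense Online Newton Step sketch: A_t = eps*I + sum_s g_s g_s^T,
# update w <- w - lr * A_t^{-1} g_t.  SON replaces the dense A_t
# with a low-rank sketch to make each step linear in the dimension.
def online_newton_step(grads, dim, eps=1.0, lr=1.0):
    w = np.zeros(dim)
    A = eps * np.eye(dim)
    for g in grads:
        A += np.outer(g, g)                 # accumulate second-order info
        w -= lr * np.linalg.solve(A, g)     # preconditioned gradient step
    return w
```

Solving with the dense A costs O(dim³) per step here; the point of SON's sketching is to replace A with a rank-m approximation so the update is linear in the dimension.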