## Convex games in Banach spaces (2009)

Citations: 12 (6 self)

### Citations

2834 | Learning with Kernels
- Scholkopf, Smola
- 2002
Citation Context ...e work, we also plan to convert the player strategies given here into implementable algorithms. Online learning algorithms can be implemented in infinite dimensional reproducing kernel Hilbert spaces =-=[21]-=- by exploiting the representer theorem and duality. We can, therefore, hope to implement online learning algorithms in infinite dimensional Banach spaces where some analogue of the representer theorem... |
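The implementation route this excerpt points to (representer theorem plus duality) can be illustrated with a minimal kernel online gradient descent sketch; the Gaussian kernel, squared loss, and step size below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def kernel_ogd(points, labels, eta=0.5):
    """Online gradient descent in an RKHS: by the representer theorem the
    hypothesis after t rounds is f(x) = sum_s alpha_s k(x_s, x), so only
    the coefficients alpha (and the seen points) need to be stored."""
    k = lambda a, b: np.exp(-np.sum((a - b) ** 2))  # Gaussian kernel (assumed)
    centers, alphas = [], []
    cumulative_loss = 0.0
    for x, y in zip(points, labels):
        f_x = sum(a * k(c, x) for c, a in zip(centers, alphas))
        cumulative_loss += 0.5 * (f_x - y) ** 2
        # gradient of 0.5*(f(x)-y)^2 in the RKHS is (f(x)-y) * k(x, .),
        # so the update just appends one new coefficient
        centers.append(x)
        alphas.append(-eta * (f_x - y))
    return cumulative_loss, centers, alphas
```

The hypothesis always lies in the span of the observed points, which is what makes the infinite-dimensional update implementable.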

338 | Problem complexity and method efficiency in optimization. Wiley-Interscience Series in Discrete Mathematics.
- Nemirovskii, Yudin
- 1983
Citation Context ... describe player strategies that achieve the optimal rates for these convex games. These strategies are all based on the Mirror Descent algorithm that originated in the convex optimization literature =-=[6]-=-. Usually Mirror Descent is run with a strongly convex function but it turns out that it can also be analyzed in our Banach space setting if it is run with a q-uniformly convex function Ψ. Moreover, w... |
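A concrete non-Euclidean instance of the Mirror Descent scheme this excerpt refers to is exponentiated gradient: Mirror Descent on the probability simplex with the negative-entropy regularizer. The regularizer and step size here are illustrative choices, not the q-uniformly convex Ψ of the paper:

```python
import numpy as np

def exponentiated_gradient(grads, eta):
    """Mirror Descent on the probability simplex with the negative-entropy
    regularizer: the dual-space gradient step becomes a multiplicative
    update, and the Bregman projection is a simple renormalization."""
    d = len(grads[0])
    w = np.full(d, 1.0 / d)      # uniform starting point
    iterates = []
    for g in grads:
        iterates.append(w.copy())
        w = w * np.exp(-eta * g)  # step in the dual space, map back
        w /= w.sum()              # Bregman projection onto the simplex
    return iterates
```

With linear losses penalizing all but the first coordinate, the iterates concentrate on that coordinate at a geometric rate.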

298 | Online convex programming and generalized infinitesimal gradient ascent.
- Zinkevich
- 2003
Citation Context ... new algorithms even for the more familiar Hilbert space case where the loss functions on each round have varying exponents of uniform convexity (curvature). 1 Introduction Online convex optimization =-=[1, 2, 3]-=- has emerged as an abstraction that allows a unified treatment of a variety of online learning problems where the underlying loss function is convex. In this abstraction, a T -round game is played bet... |
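The T-round game described in this excerpt can be sketched concretely with online gradient descent on the unit ball; the linear losses, constraint set, and step size are illustrative assumptions:

```python
import numpy as np

def play_game(T, adversary, eta):
    """One run of the T-round online convex game on the unit ball in R^2,
    with online gradient descent as the player's strategy. On each round
    the player commits to w, then the adversary reveals a linear loss."""
    w = np.zeros(2)
    total, plays = 0.0, []
    for t in range(T):
        plays.append(w.copy())
        x = adversary(t)          # adversary reveals loss l_t(w) = <w, x>
        total += float(w @ x)
        w = w - eta * x           # gradient step
        n = np.linalg.norm(w)
        if n > 1.0:
            w /= n                # project back onto the unit ball
    return total, plays
```

Against a stationary adversary the regret stays within the usual bound ‖w*‖²/(2η) + ηTG²/2, which with η ≈ 1/√T gives the O(√T) rate.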

211 | Logarithmic regret algorithms for online convex optimization.
- Hazan, Kalai, et al.
- 2006
Citation Context ... new algorithms even for the more familiar Hilbert space case where the loss functions on each round have varying exponents of uniform convexity (curvature). 1 Introduction Online convex optimization =-=[1, 2, 3]-=- has emerged as an abstraction that allows a unified treatment of a variety of online learning problems where the underlying loss function is convex. In this abstraction, a T -round game is played bet... |

132 | Martingales with values in uniformly convex spaces.
- Pisier
- 1975
Citation Context ...and any θ ∈ [0, 1], ℓ(θv1 + (1 − θ)v2) ≤ θℓ(v1) + (1 − θ)ℓ(v2) − (Cθ(1 − θ)/q) ‖v1 − v2‖^q. If C ≥ 1 we simply say that the function ℓ is q-uniformly convex. The following remarkable theorem of Pisier =-=[17]-=- shows that the concept of M-types and existence of uniformly convex functions in the Banach space are intimately connected. Theorem (Pisier). A Banach space B has M-cotype q iff there exists a q-unif... |
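As a sanity check of the definition in this excerpt: in a Hilbert space, ℓ(v) = ½‖v‖² is 2-uniformly convex with C = 1, and the defining inequality in fact holds with equality. A small numerical illustration (not from the paper):

```python
import numpy as np

def uniform_convexity_gap(l, v1, v2, theta, q, C):
    """Slack in the q-uniform-convexity inequality
      theta*l(v1) + (1-theta)*l(v2) - l(theta*v1 + (1-theta)*v2)
        >= (C*theta*(1-theta)/q) * ||v1 - v2||^q.
    Returns (interpolation gap) - (required lower bound); a nonnegative
    value means the inequality holds at this particular triple."""
    gap = theta * l(v1) + (1 - theta) * l(v2) - l(theta * v1 + (1 - theta) * v2)
    bound = C * theta * (1 - theta) / q * np.linalg.norm(v1 - v2) ** q
    return gap - bound

# l(v) = 0.5*||v||^2 is 2-uniformly convex with C = 1 in a Hilbert space;
# expanding the squares shows gap and bound coincide exactly for this l.
l = lambda v: 0.5 * np.linalg.norm(v) ** 2
```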

118 | Probabilistic methods in the geometry of Banach spaces.
- Pisier
- 1985
Citation Context ...ence sequence d1, . . . , dT with values in B, (∑_{t=1}^T E[‖dt‖^q])^{1/q} ≤ C E[‖∑_{t=1}^T dt‖]. (4) A closely related notion in Banach space theory is that of super-reflexivity. Refer to =-=[16]-=- for more details. Definition 2. A Banach space B is super-reflexive if no non-reflexive space is finitely representable in B. A result of Pisier [16] shows that a Banach space B has non-trivial M-type (p ⋆ > ... |

106 | Convex Analysis in General Vector Spaces.
- Zalinescu
- 2002
Citation Context ...d on the linear game on the unit ball (Theorem 4) and property (9). For the upper bound, note that any convex function bounded by 1 on the scaled ball rU(B) is 2/(ɛr)-Lipschitz on the ball of radius (1 − ɛ)r =-=[18]-=-. Hence, by the upper bound in Theorem 4 and property (9), we see that there exists a strategy, say W, whose regret on the ball of radius r(1 − ɛ) is bounded by (C/ɛ) T^{1/p} for any p ∈ [1, p⋆(B⋆)). That is... |

64 | Online Learning: Theory, Algorithms, and Applications.
- Shalev-Shwartz
- 2007
Citation Context ... new algorithms even for the more familiar Hilbert space case where the loss functions on each round have varying exponents of uniform convexity (curvature). 1 Introduction Online convex optimization =-=[1, 2, 3]-=- has emerged as an abstraction that allows a unified treatment of a variety of online learning problems where the underlying loss function is convex. In this abstraction, a T -round game is played bet... |

64 | Basic Concepts in the Geometry of Banach Spaces - Johnson, Lindenstrauss - 2001

47 | A stochastic view of optimal regret through minimax duality.
- Abernethy, Agarwal, et al.
- 2009
Citation Context ...l Θ⋆(·) notation hides factors that are o(T^ɛ) for every ɛ > 0. The idea of exploiting minimax-maximin duality to analyze optimal regret rates also appears in the recent work of Abernethy et al. =-=[6]-=-. The earliest papers we know of that explore the connection of the type of a Banach space to learning theory are those of Donahue et al. [7] and Gurvits [8]. More recently, Mendelson and Schechtman [... |

44 | Adaptive online gradient descent.
- Bartlett, Hazan, et al.
- 2007
Citation Context ...adaptive algorithm, building on previous work, that adapts to the exponent of uniform convexity in the adversary’s functions. Our results have novel implications even in a Hilbert space. For example, =-=[7]-=- showed how to adapt to an adversary that mixes linear and strongly convex functions in its moves. We can now allow this mix to also consist of functions with intermediate degrees of uniform convexity. ... |

35 | Optimal strategies and minimax lower bounds for online convex games.
- Abernethy, Bartlett, et al.
- 2008
Citation Context ...is O( √ T ). However, if the adversary is constrained to play strongly convex and Lipschitz functions, the regret can be brought down to O(log T ). Further, it is also known, via minimax lower bounds =-=[5]-=-, that these are the best possible rates in these situations. In a general Banach space, strongly convex functions might not even exist. We will, therefore, need a generalization of strong convexity c... |
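The O(log T) rate for strongly convex losses mentioned in this excerpt can be reproduced in a one-dimensional sketch; the quadratic losses, random adversary, and step sizes 1/(σt) are illustrative assumptions, and the bound checked is the standard (G²/2σ)(1 + log T) guarantee:

```python
import numpy as np

def strongly_convex_ogd(T, sigma=1.0, seed=0):
    """Online gradient descent with step 1/(sigma*t) on the sigma-strongly
    convex losses l_t(w) = (sigma/2)*(w - z_t)^2 in 1-D. Returns the regret
    against the best fixed point in hindsight, the final iterate, and the
    adversary's targets."""
    rng = np.random.default_rng(seed)
    zs = rng.uniform(-1, 1, size=T)   # adversary's targets, fixed in advance
    w, player_loss = 0.0, 0.0
    for t, z in enumerate(zs, start=1):
        player_loss += 0.5 * sigma * (w - z) ** 2
        w -= (1.0 / (sigma * t)) * sigma * (w - z)  # step size 1/(sigma*t)
        # the update simplifies to the running mean of z_1, ..., z_t
    w_star = zs.mean()                # best fixed comparator
    best_loss = float(np.sum(0.5 * sigma * (zs - w_star) ** 2))
    return player_loss - best_loss, w, zs
```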

30 | A note on a scale-sensitive dimension of linear bounded functionals in Banach spaces
- Gurvits
- 1997
Citation Context ...ars in the recent work of Abernethy et al. [8]. The earliest papers we know of that explore the connection of the type of a Banach space to learning theory are those of Donahue et al. [9] and Gurvits =-=[10]-=-. Mendelson and Schechtman [11] gave estimates of the fat-shattering dimension of linear functionals on a Banach space in terms of its type. In the context of online regression with squared loss, Vovk... |

27 | Rates of convex approximation in non-Hilbert spaces.
- Donahue, Darken, et al.
- 1997
Citation Context ... rates also appears in the recent work of Abernethy et al. [8]. The earliest papers we know of that explore the connection of the type of a Banach space to learning theory are those of Donahue et al. =-=[9]-=- and Gurvits [10]. Mendelson and Schechtman [11] gave estimates of the fat-shattering dimension of linear functionals on a Banach space in terms of its type. In the context of online regression with s... |

16 | Reproducing Kernel Banach Spaces for Machine Learning.
- Zhang, Xu, et al.
- 2009
Citation Context ... Hilbert space, but in some Banach space. He also mentions “Banach Learning” as an open problem in his online prediction wiki. For recent work exploring Banach spaces for learning applications, see =-=[12, 13, 14]-=-. These papers also give more reasons for considering general Banach spaces in Learning Theory. Outline The rest of the paper is organized as follows. In Section 2, we formally define the minimax and ... |

13 | Sur les espaces de Banach qui ne contiennent pas uniformément de l1n
- Pisier
Citation Context ...om the upper bounds in Theorems 4 and 5. The reverse implications 1 ⇒ 3 and 2 ⇒ 3, in turn, follow from the lower bounds in those theorems. The equivalence of 3 and 4 is due to deep results of Pisier =-=[19]-=-. The convex-Lipschitz games (and q-uniformly convex-Lipschitz games considered below) depend, by definition, not only on the player’s set W but also on the norm ‖ · ‖ of the underlying Banach space B... |

9 | Large-margin classification in Banach spaces
- Lee
- 2007
Citation Context ... Hilbert space, but in some Banach space. He also mentions “Banach Learning” as an open problem in his online prediction wiki. For recent work exploring Banach spaces for learning applications, see =-=[12, 13, 14]-=-. These papers also give more reasons for considering general Banach spaces in Learning Theory. Outline The rest of the paper is organized as follows. In Section 2, we formally define the minimax and ... |

8 | The shattering dimension of sets of linear functionals
- Mendelson, Schechtman
- 2004
Citation Context ...ethy et al. [8]. The earliest papers we know of that explore the connection of the type of a Banach space to learning theory are those of Donahue et al. [9] and Gurvits [10]. Mendelson and Schechtman =-=[11]-=- gave estimates of the fat-shattering dimension of linear functionals on a Banach space in terms of its type. In the context of online regression with squared loss, Vovk [4] also gives rates worse tha... |

8 | A convexity condition in Banach spaces and the strong law of large numbers
- Beck
- 1962
Citation Context ... some constant C such that for any T ≥ 1 and any v1, . . . , vT ∈ B, (∑_{t=1}^T ‖vt‖^q)^{1/q} ≤ C E[‖∑_{t=1}^T ɛtvt‖], (4) where the ɛt’s are i.i.d. Rademacher (symmetric ±1-valued) random variables. Beck =-=[13]-=- defined B-convexity to study the strong law of large numbers in Banach spaces. A Banach space B is B-convex if there exists T > 0 and an ɛ > 0 such that for any v1, . . . , vT ∈ B with ‖vt‖ ≤ 1, we have ... |
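In a Hilbert space the q = 2 case of the inequality quoted here follows from the exact identity E‖∑ ɛt vt‖² = ∑ ‖vt‖², by orthogonality of the Rademacher signs. A small exhaustive check over all sign patterns, purely illustrative:

```python
import itertools
import numpy as np

def rademacher_second_moment(vs):
    """Exact E[||sum_t eps_t v_t||^2] over all 2^T Rademacher sign
    patterns (feasible only for small T, which suffices here)."""
    T = len(vs)
    total = 0.0
    for signs in itertools.product([-1.0, 1.0], repeat=T):
        s = sum(e * v for e, v in zip(signs, vs))
        total += float(np.dot(s, s))
    return total / 2 ** T

# In a Hilbert space this equals sum_t ||v_t||^2: cross terms
# E[eps_i * eps_j] vanish for i != j, which is the cotype-2 identity
# underlying the q = 2 version of the displayed inequality.
```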

5 | Uniformly convex functions on Banach spaces
- Borwein, Guirao, et al.
Citation Context ...pschitz, q-uniformly convex loss functions. Note that given such a function, there exists a norm |·| such that |·| ≤ ‖·‖ ≤ L|·| (i.e. an equivalent norm) and (1/q)|·|^q is a q-uniformly convex function =-=[20]-=-. Given this, we consider a game where the adversary plays only functions from lincvxq,L(W) := {ℓ(w) = 〈w, x〉 + (1/q)|w|^q : |x|⋆ ≤ L − 1}. Note that since the above is L-Lipschitz w.r.t. |·|, it is automati... |

3 | Competing with wild prediction rules
- Vovk
Citation Context ...nsidering ft as a “point” in a function space is very fruitful, and it is very natural to assume that the space of functions that the learner can use is a Banach space of functions. For more details, see =-=[4]-=-. In the Hilbert space setting, it is known that the “degree of convexity” or “curvature” of the functions ℓt played by the adversary has a significant impact on the achievable regret rates. For examp... |

1 | Aggregating algorithm competing with Banach lattices, 2010. arXiv preprint arXiv:1002.0709 available at http://arxiv.org/abs/1002.0709
- Zhdanov, Chernov, et al.
Citation Context ... Hilbert space, but in some Banach space. He also mentions “Banach Learning” as an open problem in his online prediction wiki. For recent work exploring Banach spaces for learning applications, see =-=[12, 13, 14]-=-. These papers also give more reasons for considering general Banach spaces in Learning Theory. Outline The rest of the paper is organized as follows. In Section 2, we formally define the minimax and ... |