Results 1 - 4 of 4
Online Learning under Delayed Feedback
Abstract

Cited by 5 (0 self)
Online learning with delayed feedback has received increasing attention recently due to its many applications in distributed, web-based learning problems. In this paper we provide a systematic study of the topic, and analyze the effect of delay on the regret of online learning algorithms. Somewhat surprisingly, it turns out that delay increases the regret in a multiplicative way in adversarial problems, and in an additive way in stochastic problems. We give meta-algorithms that transform, in a black-box fashion, algorithms developed for the non-delayed case into ones that can handle the presence of delays in the feedback loop. Modifications of the well-known UCB algorithm are also developed for the bandit problem with delayed feedback, with the advantage over the meta-algorithms that they can be implemented with lower complexity.
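One standard black-box reduction of this kind runs several independent copies of a non-delayed base algorithm in round-robin, so that each copy has already received its feedback before it acts again. A minimal sketch, assuming a fixed known delay `d` and a toy follow-the-leader base learner (both hypothetical choices, not necessarily the paper's exact construction):

```python
class FollowTheLeader:
    """Toy stand-in for any non-delayed base algorithm: plays each
    action once, then the action with the best empirical mean."""
    def __init__(self, n_actions=2):
        self.sums = [0.0] * n_actions
        self.counts = [0] * n_actions

    def act(self):
        for a, c in enumerate(self.counts):
            if c == 0:
                return a
        means = [s / c for s, c in zip(self.sums, self.counts)]
        return max(range(len(means)), key=means.__getitem__)

    def update(self, action, reward):
        self.sums[action] += reward
        self.counts[action] += 1


class BlackBoxDelayWrapper:
    """Round-robin over d + 1 independent copies of the base learner.
    With a fixed feedback delay of d rounds, the copy acting in round t
    has already received the feedback for its previous play, so each
    copy effectively faces a non-delayed problem."""
    def __init__(self, make_base, delay):
        self.copies = [make_base() for _ in range(delay + 1)]
        self.t = 0

    def act(self):
        copy = self.copies[self.t % len(self.copies)]
        self.t += 1
        return copy.act()

    def update(self, round_played, action, reward):
        # Route the delayed feedback back to the copy that played it.
        self.copies[round_played % len(self.copies)].update(action, reward)
```

Since each copy only plays every d + 1 rounds, the wrapper's total regret is roughly d + 1 times that of a single base algorithm run over proportionally fewer rounds, consistent with the multiplicative effect of delay noted for adversarial problems.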
Active learning using online algorithms
 In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2011
Abstract

Cited by 2 (0 self)
This paper describes a new technique and analysis for using online learning algorithms to solve active learning problems. Our algorithm is called Active Vote, and it works by actively selecting instances that force several perturbed copies of an online algorithm to make mistakes. The main intuition for our result is based on the fact that the number of mistakes made by the optimal online algorithm is a lower bound on the number of labels needed for active learning. We provide performance bounds for Active Vote in both a batch and online model of active learning. These performance bounds depend on the algorithm having a set of unlabeled instances in which the various perturbed online algorithms disagree. The motivating application for Active Vote is an Internet advertisement rating program. We conduct experiments using data collected for this advertisement problem along with experiments using standard datasets. We show Active Vote can achieve an order of magnitude decrease in the number of labeled instances over various passive learning algorithms such as Support Vector Machines.
Combinatorial Fusion with Online Learning Algorithms
Abstract
We give a range of techniques to effectively apply online learning algorithms, such as Perceptron and Winnow, to both online and batch fusion problems. Our first technique is a new way to combine the predictions of multiple hypotheses. These hypotheses are selected from the many hypotheses that are generated in the course of online learning. Our second technique is to save old instances and use them for extra updates on the current hypothesis. These extra updates can decrease the number of mistakes made on new instances. Both techniques keep the algorithms efficient and allow the algorithms to learn in the presence of large amounts of noise.
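A minimal sketch of the two techniques on a Perceptron, with hypothetical snapshot and replay schedules and a plain unweighted vote as the combination rule (the paper's exact selection and combination methods may differ):

```python
import numpy as np

def train_with_fusion(X, y, n_snapshots=5, replay_passes=2):
    """Technique 1: snapshot intermediate hypotheses generated during
    the online pass, to be fused by voting at prediction time.
    Technique 2: replay the saved instances for extra updates on the
    current hypothesis."""
    w = np.zeros(X.shape[1])
    snapshots = []
    step = max(1, len(X) // n_snapshots)
    for i, (x, t) in enumerate(zip(X, y)):
        if np.sign(w @ x) != t:          # mistake-driven Perceptron update
            w = w + t * x
        if (i + 1) % step == 0:
            snapshots.append(w.copy())   # save this intermediate hypothesis
    for _ in range(replay_passes):       # extra updates on stored instances
        for x, t in zip(X, y):
            if np.sign(w @ x) != t:
                w = w + t * x
    snapshots.append(w.copy())
    return snapshots

def fused_predict(snapshots, x):
    # Combine the saved hypotheses by an unweighted vote.
    votes = sum(np.sign(w @ x) for w in snapshots)
    return 1.0 if votes >= 0 else -1.0
```

Voting over several intermediate hypotheses, rather than trusting only the final one, is what gives the fused predictor its robustness to noisy updates late in the pass.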