Results 1 – 4 of 4
The sample complexity of revenue maximization. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, 2014.
Abstract

Cited by 9 (4 self)
In the design and analysis of revenue-maximizing auctions, auction performance is typically measured with respect to a prior distribution over inputs. The most obvious source for such a distribution is past data. The goal of this paper is to understand how much data is necessary and sufficient to guarantee near-optimal expected revenue. Our basic model is a single-item auction in which bidders' valuations are drawn independently from unknown and non-identical distributions. The seller is given m samples from each of these distributions "for free" and chooses an auction to run on a fresh sample. How large does m need to be, as a function of the number k of bidders and ε > 0, so that a (1 − ε)-approximation of the optimal revenue is achievable? We prove that, under standard tail conditions on the underlying distributions, m = poly(k, 1/ε) samples are necessary and sufficient. Our lower bound stands in contrast to many recent results on simple and prior-independent auctions and fundamentally involves the interplay between bidder competition, non-identical distributions, and a very close (but still constant) approximation of the optimal revenue. It effectively shows that the only way to achieve a sufficiently good constant approximation of the optimal revenue is through a detailed understanding of bidders' valuation distributions. Our upper bound is constructive and applies in particular to a variant of the empirical Myerson auction, the natural auction that runs the revenue-maximizing auction with respect to the empirical distributions of the samples. Our sample complexity lower bound depends on the set of allowable distributions, and to capture this we introduce α-strongly regular distributions, which interpolate between the well-studied classes of regular (α = 0) and MHR (α = 1) distributions. We give evidence that this definition is of independent interest.
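The empirical Myerson auction mentioned in the abstract runs the revenue-maximizing auction with respect to the empirical sample distributions. As a minimal illustration of the single-bidder special case (a hypothetical sketch, not the paper's construction), the revenue-optimal posted price for an empirical distribution is always one of the observed sample values, namely the one maximizing empirical revenue p · P̂[v ≥ p]:

```python
import random

def empirical_reserve(samples):
    """Return the posted price maximizing empirical revenue p * Pr[v >= p].

    Sorting descending means the (i+1)-th highest sample has exactly
    i+1 samples at or above it, so each candidate price is scored in O(1).
    """
    vals = sorted(samples, reverse=True)
    n = len(vals)
    best_price, best_revenue = 0.0, 0.0
    for i, p in enumerate(vals):
        revenue = p * (i + 1) / n
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price

# Example: values uniform on [0, 1]; the true revenue curve p * (1 - p)
# is maximized at a reserve of 0.5, and the empirical reserve concentrates there.
random.seed(0)
samples = [random.random() for _ in range(2000)]
reserve = empirical_reserve(samples)
```

With enough samples the empirical reserve approaches the true optimum; the paper's question is precisely how fast this convergence translates into revenue guarantees with k competing bidders.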
Online learning in repeated auctions.
Abstract
Motivated by online advertising auctions, we consider repeated Vickrey auctions where goods of unknown value are sold sequentially and bidders only learn (potentially noisy) information about a good's value once it is purchased. We adopt an online learning approach with bandit feedback to model this problem and derive bidding strategies for two models: stochastic and adversarial. In the stochastic model, the observed values of the goods are random variables centered around the true value of the good. In this case, logarithmic regret is achievable when competing against well-behaved adversaries. In the adversarial model, the goods need not be identical and we simply compare our performance against that of the best fixed bid in hindsight. We show that sublinear regret is also achievable in this case and prove matching minimax lower bounds. To our knowledge, this is the first complete set of strategies for bidders participating in auctions of this type.
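To make the bandit-feedback setup concrete, here is a toy optimism-based bidding loop for the stochastic model (an illustrative sketch of the UCB idea, not the paper's algorithm; the fixed competitor bid and noise parameters are assumptions for the demo). The bidder bids an optimistic value estimate, pays second price on a win, and only then observes a noisy sample of the good's value:

```python
import math
import random

def ucb_bidding(true_value, competitor_bid, noise, rounds, seed=0):
    """Repeated second-price auction against a fixed competing bid.

    The bidder bids (estimate + confidence bonus); the bonus shrinks as
    more value observations accumulate, so exploration fades over time.
    """
    rng = random.Random(seed)
    est, wins = 0.0, 0
    for t in range(1, rounds + 1):
        if wins == 0:
            bid = float("inf")  # must win at least once to get any feedback
        else:
            bid = est + math.sqrt(2.0 * math.log(t) / wins)
        if bid > competitor_bid:
            # Win: pay the second price and observe a noisy value sample.
            observed = true_value + rng.gauss(0.0, noise)
            wins += 1
            est += (observed - est) / wins  # running mean of observations
    return est, wins

est, wins = ucb_bidding(true_value=0.7, competitor_bid=0.5, noise=0.1, rounds=2000)
```

Here the good is worth more than the competing bid, so the optimistic bidder keeps winning and its estimate converges to the true value; the interesting regret analysis in the paper concerns exactly how much is overpaid while learning.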
The Limitations of Optimization from Samples
Abstract
As we grow highly dependent on data for making predictions, we translate these predictions into models that help us make informed decisions. But what are the guarantees we have? Can we optimize decisions on models learned from data and be guaranteed that we achieve desirable outcomes? In this paper we formalize this question through a novel model called approximation from samples (APS). In the APS model, we are given sampled values of a function drawn from some distribution and our objective is to optimize the function under some constraint. Our main interest is in the following question: are functions that are learnable (from samples) and approximable (given oracle access to the function) also approximable from samples? We show that there are classes of submodular functions which have desirable approximation and learnability guarantees and for which no reasonable approximation from samples is achievable. In particular, our main result shows that even for maximization of coverage functions under a cardinality constraint k, there exists a hypothesis class of functions that cannot be approximated within a factor of n^(−1/4+ε) (for any constant ε > 0) of the optimal solution, from samples drawn from the uniform distribution over all sets of size at most k. In the general case of monotone submodular functions, we show an n^(−1/3+ε) lower bound and an almost matching Ω̃(n^(−1/3))-approximation from samples algorithm. Additive and unit-demand functions can be approximated from samples to within arbitrarily good precision. Finally, we also consider a corresponding notion of additive approximation from samples for continuous optimization, and show near-optimal hardness for concave maximization and convex minimization.
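The positive claim for additive functions has a simple illustration (a hypothetical sketch; the estimator, sampling distribution, and parameters are assumptions for the demo, not the paper's construction). If f(S) = Σ_{i∈S} w_i and each item is included in a sample set i.i.d. with probability 1/2, then each weight equals E[f(S) | i ∈ S] − E[f(S) | i ∉ S], and maximizing under a cardinality-k constraint reduces to taking the top-k estimated weights:

```python
import random

def estimate_weights(samples, n):
    """Estimate per-item weights of an additive set function from samples.

    samples: list of (frozenset S, f(S)) pairs where each item was included
    in S independently with probability 1/2. The contributions of the other
    items cancel in expectation, leaving w_i as the difference of means.
    """
    weights = []
    for i in range(n):
        with_i = [v for s, v in samples if i in s]
        without_i = [v for s, v in samples if i not in s]
        weights.append(sum(with_i) / len(with_i)
                       - sum(without_i) / len(without_i))
    return weights

# Demo: hidden additive function with known weights, then recover the
# optimal cardinality-2 set from samples alone.
rng = random.Random(1)
true_w = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
n = len(true_w)
samples = []
for _ in range(4000):
    s = frozenset(i for i in range(n) if rng.random() < 0.5)
    samples.append((s, sum(true_w[i] for i in s)))

est = estimate_weights(samples, n)
top2 = sorted(range(n), key=lambda i: est[i], reverse=True)[:2]
```

The paper's hardness results show that no analogously simple route exists for coverage or general monotone submodular functions: the structure visible in samples is not enough to optimize them.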