Results 1 - 9 of 9
Making the most of your samples
In Proceedings of the Sixteenth ACM Conference on Economics and Computation, EC ’15
"... We study the problem of setting a price for a potential buyer with a valuation drawn from an unknown distribution D. The seller has “data ” about D in the form of m ≥ 1 i.i.d. samples, and the algorithmic challenge is to use these samples to obtain expected revenue as close as possible to what could ..."
Abstract
-
Cited by 3 (2 self)
- Add to MetaCart
We study the problem of setting a price for a potential buyer with a valuation drawn from an unknown distribution D. The seller has “data” about D in the form of m ≥ 1 i.i.d. samples, and the algorithmic challenge is to use these samples to obtain expected revenue as close as possible to what could be achieved with advance knowledge of D. Our first set of results quantifies the number of samples m that are necessary and sufficient to obtain a (1 − ε)-approximation. For example, for an unknown distribution that satisfies the monotone hazard rate (MHR) condition, we prove that Θ̃(ε^{-3/2}) samples are necessary and sufficient. Remarkably, this is fewer samples than is necessary to accurately estimate the expected revenue obtained by even a single reserve price. We also prove essentially tight sample complexity bounds for regular distributions, bounded-support distributions, and a wide class of irregular distributions. Our lower bound approach borrows tools from differential privacy and information theory, and we believe it could find further applications in auction theory. Our second set of results considers the single-sample case. For regular distributions, we prove that no pricing strategy is better than 1/2-approximate, and this is optimal by the Bulow-Klemperer theorem. For MHR distributions, we show how to do better: we give a simple pricing strategy that guarantees expected revenue at least 0.589 times the maximum possible. We also prove that no pricing strategy achieves an approximation guarantee better than e/4 ≈ 0.68.
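To make the single-sample pricing rule concrete, here is a minimal simulation sketch (my illustration, not code from the paper; the Exponential(1) value distribution, which satisfies MHR, and the numpy setup are assumptions chosen for convenience). It estimates the revenue of posting the one observed sample as a take-it-or-leave-it price and compares it to the optimal monopoly revenue.

import numpy as np

# Toy simulation: single-sample pricing against Exponential(1) buyer values.
rng = np.random.default_rng(0)
n_trials = 200_000

samples = rng.exponential(1.0, n_trials)   # the seller's one i.i.d. sample from D
buyers = rng.exponential(1.0, n_trials)    # the buyer's valuation, also drawn from D

# Post the sample as the price; revenue equals the price whenever the buyer accepts.
single_sample_rev = np.where(buyers >= samples, samples, 0.0).mean()
opt_rev = 1.0 / np.e                       # monopoly price 1 sells with probability 1/e

print(f"single-sample revenue ~ {single_sample_rev:.3f}")
print(f"optimal revenue       = {opt_rev:.3f}")
print(f"ratio                 ~ {single_sample_rev / opt_rev:.3f}")

For this particular distribution the ratio works out to about e/4 ≈ 0.68, consistent with the upper bound quoted above.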
Randomization beats Second Price as a Prior-Independent Auction, 2015
"... Designing revenue optimal auctions for selling an item to n symmetric bidders is a funda-mental problem in mechanism design. Myerson (1981) shows that the second price auction with an appropriate reserve price is optimal when bidders ’ values are drawn i.i.d. from a known reg-ular distribution. A co ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
Designing revenue optimal auctions for selling an item to n symmetric bidders is a fundamental problem in mechanism design. Myerson (1981) shows that the second price auction with an appropriate reserve price is optimal when bidders’ values are drawn i.i.d. from a known regular distribution. A cornerstone in the prior-independent revenue maximization literature is a result by Bulow and Klemperer (1996) showing that the second price auction without a reserve achieves (n − 1)/n of the optimal revenue in the worst case. We construct a randomized mechanism that strictly outperforms the second price auction in this setting. Our mechanism inflates the second highest bid with a probability that varies with n. For two bidders we improve the performance guarantee from 0.5 to 0.512 of the optimal revenue. We also resolve a question in the design of revenue optimal mechanisms that have access to a single sample from an unknown distribution. We show that a randomized mechanism strictly outperforms all deterministic mechanisms in terms of worst case guarantee.
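To illustrate the shape of such a mechanism only (this is not the paper's construction, and the inflation probability q and factor alpha below are made-up values with no claim of a worst-case guarantee), here is a small numpy sketch of a two-bidder second-price auction that occasionally inflates the price charged to the winner.

import numpy as np

rng = np.random.default_rng(1)
n, q, alpha = 500_000, 0.2, 1.3                 # q and alpha are illustrative only

bids = np.sort(rng.exponential(1.0, size=(n, 2)), axis=1)
second, first = bids[:, 0], bids[:, 1]

# Plain second price with two bidders: the winner pays the second-highest bid.
rev_second_price = second.mean()

# Randomized variant: with probability q, charge alpha times the second-highest
# bid instead; the item goes unsold if the winner's bid does not clear that price.
inflate = rng.random(n) < q
price = np.where(inflate, alpha * second, second)
rev_randomized = np.where(first >= price, price, 0.0).mean()

print(f"second price revenue ~ {rev_second_price:.4f}")
print(f"randomized variant   ~ {rev_randomized:.4f}")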
Learning Simple Auctions, 2016
"... Abstract We present a general framework for proving polynomial sample complexity bounds for the problem of learning from samples the best auction in a class of "simple" auctions. Our framework captures the most prominent examples of "simple" auctions, including anonymous and non ..."
Abstract
- Add to MetaCart
(Show Context)
We present a general framework for proving polynomial sample complexity bounds for the problem of learning from samples the best auction in a class of “simple” auctions. Our framework captures the most prominent examples of “simple” auctions, including anonymous and non-anonymous item and bundle pricings, with either a single or multiple buyers. The first step of the framework is to show that the set of auction allocation rules has a low-dimensional representation. The second step shows that, across the subset of auctions that share the same allocations on a given set of samples, the auction revenue varies in a low-dimensional way. Our results imply that in typical scenarios where it is possible to compute a near-optimal simple auction with a known prior, it is also possible to compute such an auction with an unknown prior, given a polynomial number of samples.
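As a toy instance of learning a “simple” auction from samples (my simplification, not the paper's framework; the posted-price class, the grid, and the exponential prior are assumptions), the sketch below runs empirical revenue maximization over a finite grid of anonymous posted prices for a single item and a single buyer.

import numpy as np

rng = np.random.default_rng(2)

def best_price_from_samples(samples, grid):
    # Pick the grid price with the highest average revenue on the samples.
    revenues = np.array([(p * (samples >= p)).mean() for p in grid])
    return grid[int(revenues.argmax())]

train = rng.exponential(1.0, 5_000)          # m i.i.d. samples from the unknown prior
grid = np.linspace(0.05, 3.0, 60)            # the "simple" class: 60 posted prices
p_hat = best_price_from_samples(train, grid)

test = rng.exponential(1.0, 500_000)         # fresh draws stand in for the true prior
print(f"learned price {p_hat:.2f}, revenue ~ {(p_hat * (test >= p_hat)).mean():.3f}")
print(f"monopoly price 1.00, revenue ~ {np.exp(-1.0):.3f}")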
Online learning in repeated auctions.
"... Abstract. Motivated by online advertising auctions, we consider re-peated Vickrey auctions where goods of unknown value are sold se-quentially and bidders only learn (potentially noisy) information about a good’s value once it is purchased. We adopt an online learning ap-proach with bandit feedback ..."
Abstract
- Add to MetaCart
Motivated by online advertising auctions, we consider repeated Vickrey auctions where goods of unknown value are sold sequentially and bidders only learn (potentially noisy) information about a good’s value once it is purchased. We adopt an online learning approach with bandit feedback to model this problem and derive bidding strategies for two models: stochastic and adversarial. In the stochastic model, the observed values of the goods are random variables centered around the true value of the good. In this case, logarithmic regret is achievable when competing against well behaved adversaries. In the adversarial model, the goods need not be identical and we simply compare our performance against that of the best fixed bid in hindsight. We show that sublinear regret is also achievable in this case and prove matching minimax lower bounds. To our knowledge, this is the first complete set of strategies for bidders participating in auctions of this type.
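The loop below is a toy strategy in the spirit of the stochastic model (it is not the paper's algorithm; the optimistic-bid rule, the uniform competing price, and all constants are my own assumptions): the bidder submits an optimistic estimate of the good's value in a second-price auction and only observes a noisy value on rounds where it wins.

import numpy as np

rng = np.random.default_rng(3)
T, true_value, noise = 10_000, 0.7, 0.1

wins, value_sum, surplus = 0, 0.0, 0.0
for t in range(1, T + 1):
    if wins == 0:
        bid = 1.0                                   # explore: bid high until the first win
    else:
        # Empirical mean of observed values plus a confidence bonus (optimism).
        bid = value_sum / wins + np.sqrt(2 * np.log(t) / wins)

    competing_price = rng.uniform(0.0, 1.0)         # highest competing bid this round
    if bid >= competing_price:                      # we win and pay the second price
        observed = true_value + noise * rng.standard_normal()
        wins += 1
        value_sum += observed
        surplus += true_value - competing_price

print(f"average surplus per round ~ {surplus / T:.3f}")
print(f"oracle (always bid the true value) ~ {0.5 * true_value ** 2:.3f}")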
The Limitations of Optimization from Samples
"... As we grow highly dependent on data for making predictions, we translate these predictions into models that help us make informed decisions. But what are the guarantees we have? Can we optimize decisions on models learned from data and be guaranteed that we achieve desirable outcomes? In this paper ..."
Abstract
- Add to MetaCart
(Show Context)
As we grow highly dependent on data for making predictions, we translate these predictions into models that help us make informed decisions. But what are the guarantees we have? Can we optimize decisions on models learned from data and be guaranteed that we achieve desirable outcomes? In this paper we formalize this question through a novel model called approximation from samples (APS). In the APS model, we are given sampled values of a function drawn from some distribution and our objective is to optimize the function under some constraint. Our main interest is in the following question: are functions that are learnable (from samples) and approximable (given oracle access to the function) also approximable from samples? We show that there are classes of submodular functions which have desirable approximation and learnability guarantees and for which no reasonable approximation from samples is achievable. In particular, our main result shows that even for maximization of coverage functions under a cardinality constraint k, there exists a hypothesis class of functions that cannot be approximated within a factor of n^{-1/4+ε} (for any constant ε > 0) of the optimal solution, from samples drawn from the uniform distribution over all sets of size at most k. In the general case of monotone submodular functions, we show an n^{-1/3+ε} lower bound and an almost matching Ω̃(n^{-1/3})-approximation from samples algorithm. Additive and unit-demand functions can be approximated from samples to within arbitrarily good precision. Finally, we also consider a corresponding notion of additive approximation from samples for continuous optimization, and show near-optimal hardness for concave maximization and convex minimization.
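As a toy of the approximation-from-samples interface only (this is not the paper's hardness construction; the planted-coverage instance, the ground-set sizes, and the sample counts are my own choices), the sketch below contrasts greedy with full oracle access, which reliably finds a planted high-coverage item, against an optimizer that only sees values of random sets of size at most k and will usually never observe that item.

import numpy as np

rng = np.random.default_rng(4)
n_items, n_elems, k = 300, 200, 3

# Each item covers a small random subset of the universe; one planted item
# covers a large block that greedy finds immediately.
covers = [set(rng.choice(n_elems, size=10, replace=False)) for _ in range(n_items)]
covers[0] = set(range(100))

def coverage(items):
    return len(set().union(*(covers[i] for i in items))) if items else 0

# Greedy with oracle access (the classic (1 - 1/e)-approximation for coverage).
chosen = []
for _ in range(k):
    gains = {i: coverage(chosen + [i]) for i in range(n_items) if i not in chosen}
    chosen.append(max(gains, key=gains.get))
print("greedy with oracle access:", coverage(chosen))

# Optimization from samples: only the values of random size-<=k sets are visible.
sampled_sets = [list(rng.choice(n_items, size=k, replace=False)) for _ in range(30)]
print("best sampled set:         ", coverage(max(sampled_sets, key=coverage)))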
Revenue Optimization against Strategic Buyers
"... We present a revenue optimization algorithm for posted-price auctions when fac-ing a buyer with random valuations who seeks to optimize his γ-discounted sur-plus. In order to analyze this problem we introduce the notion of -strategic buyer, a more natural notion of strategic behavior than what has b ..."
Abstract
- Add to MetaCart
(Show Context)
We present a revenue optimization algorithm for posted-price auctions when facing a buyer with random valuations who seeks to optimize his γ-discounted surplus. In order to analyze this problem we introduce the notion of ε-strategic buyer, a more natural notion of strategic behavior than what has been considered in the past. We improve upon the previous state-of-the-art and achieve an optimal regret bound in O(log T + 1/log(1/γ)) when the seller selects prices from a finite set and provide a regret bound in Õ(√T + T^{1/4}/log(1/γ)) when the prices offered are selected out of the interval [0, 1].
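For orientation, one standard formalization of the objective (my paraphrase of the usual setup, not quoted from the paper): facing posted prices p_1, …, p_T while holding values v_1, …, v_T, a buyer who accepts in the rounds with a_t = 1 collects discounted surplus Σ_{t=1}^{T} γ^{t−1} a_t (v_t − p_t); roughly, an ε-strategic buyer deviates from myopically surplus-maximizing behavior only when doing so can improve this discounted sum by more than ε.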
Approximately Optimal Mechanism Design: Motivation, Examples, and Lessons Learned, 2014
"... This survey describes the approximately optimal mechanism design paradigm and uses it to in-vestigate two basic questions in auction theory. First, when is complexity — in the sense of detailed distributional knowledge — an essential feature of revenue-maximizing single-item auctions? Second, do com ..."
Abstract
- Add to MetaCart
(Show Context)
This survey describes the approximately optimal mechanism design paradigm and uses it to investigate two basic questions in auction theory. First, when is complexity — in the sense of detailed distributional knowledge — an essential feature of revenue-maximizing single-item auctions? Second, do combinatorial auctions require high-dimensional bid spaces to achieve good social welfare?