Results 1-10 of 534
Position auctions
, 2007
Abstract

Cited by 317 (4 self)
I analyze the equilibria of a game based on the ad auction used by Google and Yahoo. This auction is closely related to the assignment game studied by Shapley-Shubik, Demange-Gale-Sotomayor, and Roth-Sotomayor. However, due to the special structure of preferences, the equilibria of the ad auction can be calculated explicitly and some known results can be sharpened. I provide some empirical evidence that the Nash equilibria of the position auction describe the basic properties of the prices observed in Google's ad auction reasonably accurately.
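As a companion to the abstract above, here is a minimal sketch of the generalized second-price (GSP) pricing rule that this kind of position auction is based on; the bidder names and bid values are illustrative:

```python
def gsp_allocate(bids, num_slots):
    """GSP sketch: sort bidders by bid; the winner of slot k pays the
    (k+1)-th highest bid per click.

    bids: dict mapping bidder -> per-click bid.
    Returns a list of (bidder, price_paid_per_click), one per filled slot.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    outcome = []
    for k in range(min(num_slots, len(ranked))):
        bidder, _ = ranked[k]
        # Price is the next-highest bid, or 0 if nobody bids below.
        price = ranked[k + 1][1] if k + 1 < len(ranked) else 0.0
        outcome.append((bidder, price))
    return outcome

print(gsp_allocate({"a": 3.0, "b": 2.0, "c": 1.0}, num_slots=2))
# -> [('a', 2.0), ('b', 1.0)]
```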
Adwords and generalized online matching
 In FOCS ’05: Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science
, 2005
Abstract

Cited by 144 (6 self)
How does a search engine company decide what ads to display with each query so as to maximize its revenue? This turns out to be a generalization of the online bipartite matching problem. We introduce the notion of a tradeoff-revealing LP and use it to derive two optimal algorithms achieving competitive ratios of 1 − 1/e for this problem.
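The budget-aware allocation idea behind the 1 − 1/e result can be sketched as follows, assuming the well-known scaling ψ(f) = 1 − e^(f − 1) of each bid by the fraction f of budget already spent; the query and bidder names are illustrative:

```python
import math

def msvv_assign(queries, bids, budgets):
    """Sketch of budget-scaled allocation for the adwords problem.

    Each arriving query goes to the bidder maximizing
    bid * (1 - e^(spent_fraction - 1)), so bidders who have spent most of
    their budget are discounted. Charges min(bid, remaining budget).
    """
    spent = {b: 0.0 for b in budgets}
    assignment = []
    for q in queries:
        best, best_score = None, 0.0
        for b, bid in bids.get(q, {}).items():
            if spent[b] >= budgets[b]:
                continue  # budget exhausted
            score = bid * (1.0 - math.exp(spent[b] / budgets[b] - 1.0))
            if score > best_score:
                best, best_score = b, score
        if best is not None:
            charge = min(bids[q][best], budgets[best] - spent[best])
            spent[best] += charge
            assignment.append((q, best, charge))
    return assignment
```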
Truthful Auctions for Pricing Search Keywords
, 2006
Abstract

Cited by 133 (5 self)
We present a truthful auction for pricing advertising slots on a webpage assuming that advertisements for different merchants must be ranked in decreasing order of their (weighted) bids. This captures both the Overture model where bidders are ranked in order of the submitted bids, and the Google model where bidders are ranked in order of the expected revenue (or utility) that their advertisement generates. Assuming separable click-through rates, we prove revenue equivalence between our auction and the non-truthful next-price auctions currently in use.
An analysis of alternative slot auction designs for sponsored search
 In Proceedings of the 7th ACM conference on Electronic commerce
, 2006
Abstract

Cited by 92 (6 self)
Billions of dollars are spent each year on sponsored search, a form of advertising where merchants pay for placement alongside web search results. Slots for ad listings are allocated via an auction-style mechanism where the higher a merchant bids, the more likely his ad is to appear above other ads on the page. In this paper we analyze the incentive, efficiency, and revenue properties of two slot auction designs: “rank by bid” (RBB) and “rank by revenue” (RBR), which correspond to stylized versions of the mechanisms currently used by Yahoo! and Google, respectively. We also consider first- and second-price payment rules together with each of these allocation rules, as both have been used historically. We consider both the “short-run” incomplete information setting and the “long-run” complete information setting. With incomplete information, neither RBB nor RBR is truthful with either first or second pricing. We find that the informational requirements of RBB are much weaker than those of RBR, but that RBR is efficient whereas RBB is not. We also show that no revenue ranking of RBB and RBR is possible given an arbitrary distribution over bidder values and relevance. With complete information, we find that no equilibrium exists with first pricing using either RBB or RBR. We show that there typically exists a multitude of equilibria with second pricing, and we bound the divergence of (economic) value in such equilibria from the value obtained assuming all merchants bid truthfully.
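The two stylized allocation rules can be sketched directly; the ad records and relevance (CTR) values below are illustrative:

```python
def rank_by_bid(ads):
    """RBB: order ads by bid alone (stylized Yahoo! rule)."""
    return sorted(ads, key=lambda a: a["bid"], reverse=True)

def rank_by_revenue(ads):
    """RBR: order ads by bid * relevance, i.e. expected revenue per
    impression (stylized Google rule)."""
    return sorted(ads, key=lambda a: a["bid"] * a["ctr"], reverse=True)

ads = [{"name": "x", "bid": 2.0, "ctr": 0.1},
       {"name": "y", "bid": 1.0, "ctr": 0.3}]
# RBB puts x first (higher bid); RBR puts y first (0.3 > 0.2 expected revenue).
```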
S4: Distributed stream computing platform
 In Intl. Workshop on Knowledge Discovery Using Cloud and Distributed Computing Platforms (KDCloud)
, 2010
Abstract

Cited by 86 (0 self)
S4 is a general-purpose, distributed, scalable, partially fault-tolerant, pluggable platform that allows programmers to easily develop applications for processing continuous unbounded streams of data. Keyed data events are routed with affinity to Processing Elements (PEs), which consume the events and do one or both of the following: (1) emit one or more events which may be consumed by other PEs, (2) publish results. The architecture resembles the Actors model [1], providing semantics of encapsulation and location transparency, thus allowing applications to be massively concurrent while exposing a simple programming interface to application developers. In this paper, we outline the S4 architecture in detail and describe various applications, including real-life deployments. Our design is primarily driven by large-scale applications for data mining and machine learning in a production environment. We show that the S4 design is surprisingly flexible and lends itself to run in large clusters built with commodity hardware.
Keywords: actors programming model; complex event processing; concurrent programming; data processing; distributed programming; map-reduce; middleware; parallel programming; real-time search; software design; stream computing
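The keyed routing described above can be sketched as follows; the `Router` and `CountingPE` classes are illustrative stand-ins, not S4's actual API:

```python
class CountingPE:
    """A toy Processing Element that counts the events it consumes."""
    def __init__(self):
        self.count = 0

    def process(self, event):
        self.count += 1

class Router:
    """Routes events with key affinity: all events sharing a key value
    reach the same PE instance, created lazily on first use."""
    def __init__(self, pe_factory):
        self.pe_factory = pe_factory
        self.pes = {}  # one PE instance per distinct key value

    def route(self, key, event):
        pe = self.pes.setdefault(key, self.pe_factory())
        pe.process(event)

r = Router(CountingPE)
for word in ["ad", "query", "ad"]:
    r.route(word, {"word": word})
# r.pes["ad"].count == 2 and r.pes["query"].count == 1
```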
Web-Scale Bayesian Click-Through Rate Prediction for Sponsored Search Advertising in Microsoft’s Bing Search Engine
Abstract

Cited by 71 (2 self)
We describe a new Bayesian click-through rate (CTR) prediction algorithm used for Sponsored Search in Microsoft’s Bing search engine. The algorithm is based on a probit regression model that maps discrete or real-valued input features to probabilities. It maintains Gaussian beliefs over weights of the model and performs Gaussian online updates derived from approximate message passing. Scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. We discuss the challenges arising from evaluating and tuning the predictor as part of the complex system of sponsored search.
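A compressed sketch of the idea, assuming the standard closed-form Gaussian update for probit regression with independent per-weight beliefs; the feature encoding and constants here are illustrative, not the production system's:

```python
import math

SQRT2PI = math.sqrt(2.0 * math.pi)

def phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def pdf(t):
    """Standard normal density."""
    return math.exp(-0.5 * t * t) / SQRT2PI

def predict(mu, var, active, beta2=1.0):
    """P(click) given Gaussian beliefs N(mu[i], var[i]) on the weights of
    the binary features listed in `active`; beta2 is extra probit noise."""
    m = sum(mu[i] for i in active)
    s = sum(var[i] for i in active) + beta2
    return phi(m / math.sqrt(s))

def update(mu, var, active, y, beta2=1.0):
    """One closed-form online update; y is +1 (click) or -1 (no click)."""
    m = sum(mu[i] for i in active)
    s = sum(var[i] for i in active) + beta2
    t = y * m / math.sqrt(s)
    v = pdf(t) / phi(t)       # mean-shift factor
    w = v * (v + t)           # variance-shrink factor
    for i in active:
        mu[i] += y * (var[i] / math.sqrt(s)) * v
        var[i] *= 1.0 - (var[i] / s) * w

mu, var = [0.0, 0.0], [1.0, 1.0]
update(mu, var, active=[0, 1], y=+1)
# After observing a click, the predicted CTR for the same features rises
# above the 0.5 prior, and each weight's variance shrinks.
```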
Greedy bidding strategies for keyword auctions
 In Eighth ACM Conference on Electronic Commerce
, 2007
Abstract

Cited by 69 (7 self)
How should players bid in keyword auctions such as those used by Google, Yahoo! and MSN? We consider greedy bidding strategies for a repeated auction on a single keyword, where in each round, each player chooses some optimal bid for the next round, assuming that the other players merely repeat their previous bid. We study the revenue, convergence and robustness properties of such strategies. Most interesting among these is a strategy we call the balanced bidding strategy (bb): it is known that bb has a unique fixed point with payments identical to those of the VCG mechanism. We show that if all players use the bb strategy and update each round, bb converges when the number of slots is at most 2, but does not always converge for 3 or more slots. On the other hand, we present a simple variant which is guaranteed to converge to the same fixed point for any number of slots. In a model in which only one randomly chosen player updates each round according to the bb strategy, we prove that convergence occurs with probability 1. We complement our theoretical results with empirical studies.
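The balanced bidding (bb) rule can be sketched as follows, assuming per-slot click-through rates and current slot prices are observable; the top-slot convention of simply bidding one's value is one of several possible choices, not the paper's only one:

```python
def balanced_bid(value, ctrs, prices):
    """Balanced bidding sketch: target the slot s maximizing
    ctr[s] * (value - price[s]), then bid so the player ranked just above
    would be indifferent between its slot and ours:
        b = value - (ctr[s] / ctr[s-1]) * (value - price[s]).

    ctrs: per-slot click-through rates, decreasing from top slot.
    prices: current per-click price of each slot.
    """
    utilities = [c * (value - p) for c, p in zip(ctrs, prices)]
    s = max(range(len(ctrs)), key=lambda i: utilities[i])
    if s == 0:
        return value  # top slot: any bid above price 1 works; bid value
    return value - (ctrs[s] / ctrs[s - 1]) * (value - prices[s])

# With value 10, slot CTRs [1.0, 0.5] and prices [9.0, 2.0], slot 2 gives
# utility 4.0 > 1.0, so bb bids 10 - 0.5 * (10 - 2) = 6.0.
```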
The adwords problem: Online keyword matching with budgeted bidders under random permutations
 In Proc. 10th Annual ACM Conference on Electronic Commerce (EC)
, 2009
Abstract

Cited by 69 (7 self)
We consider the problem of a search engine trying to assign a sequence of search keywords to a set of competing bidders, each with a daily spending limit. The goal is to maximize the revenue generated by these keyword sales, bearing in mind that, as some bidders may eventually exceed their budget, not all keywords should be sold to the highest bidder. We assume that the sequence of keywords (or equivalently, of bids) is revealed online. Our concern will be the competitive ratio for this problem versus the offline optimum. We extend the current literature on this problem by considering the setting where the keywords arrive in a random order. In this setting we are able to achieve a competitive ratio of 1 − ɛ under some mild, but necessary, assumptions.
Strategic bidder behavior in sponsored search auctions
 In Workshop on Sponsored Search Auctions, ACM Electronic Commerce
, 2005
Abstract

Cited by 67 (1 self)
We examine prior and current sponsored search auctions and find evidence of strategic bidder behavior. Between June 15, 2002, and June 14, 2003, we estimate that Overture’s revenue from sponsored search could have been substantially higher if it had been able to prevent this strategic behavior. We also show that advertisers’ strategic behavior has not disappeared over time; rather, such behavior remains present on both Google and Overture. We conclude by discussing alternative auction designs that could reduce this strategic behavior and raise search engines’ revenue, as well as increase the overall efficiency of the market.
Online budgeted matching in random input models with applications to adwords
 In SODA 2008
Abstract

Cited by 67 (10 self)
We study an online assignment problem, motivated by Adwords Allocation, in which queries are to be assigned to bidders with budget constraints. We analyze the performance of the Greedy algorithm (which assigns each query to the highest bidder) in a randomized input model with queries arriving in a random permutation. Our main result is a tight analysis of Greedy in this model showing that it has a competitive ratio of 1 − 1/e for maximizing the value of the assignment. We also consider the more standard i.i.d. model of input, and show that our analysis holds there as well. This is to be contrasted with the worst-case analysis of [MSVV05], which shows that Greedy has a ratio of 1/2, and that the optimal algorithm presented there has a ratio of 1 − 1/e. The analysis of Greedy is important in the Adwords setting because it is the natural allocation algorithm for an auction-style process. From a theoretical perspective, our result simplifies and generalizes the classic algorithm of Karp, Vazirani and Vazirani for online bipartite matching. Our results include a new proof that the Ranking algorithm of [KVV90] has a ratio of 1 − 1/e in the worst case. It has recently been discovered [KV07] (independently of our results) that one of the crucial lemmas in [KVV90], related to a certain reduction, is incorrect. Our proof is direct, in that it does not go via such a reduction, which also enables us to generalize the analysis to our online assignment problem.
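A sketch of the Greedy allocation analyzed here, assuming each query goes to the bidder with the highest bid capped by remaining budget; bidder names, bids, and budgets are illustrative:

```python
def greedy_assign(queries, bids, budgets):
    """Greedy for budgeted adwords: give each query to the bidder with
    the highest effective bid, where the effective bid is capped by the
    bidder's remaining budget. Returns total revenue collected."""
    remaining = dict(budgets)
    revenue = 0.0
    for q in queries:
        best = max(bids.get(q, {}).items(),
                   key=lambda kv: min(kv[1], remaining[kv[0]]),
                   default=None)
        if best is None:
            continue  # nobody bid on this query
        bidder, bid = best
        charge = min(bid, remaining[bidder])
        remaining[bidder] -= charge
        revenue += charge
    return revenue
```

Three identical queries with bids A: 2.0, B: 1.0 and budgets A: 3.0, B: 5.0 yield revenue 2.0 + 1.0 + 1.0 = 4.0: A wins twice until its budget caps out, then B takes over.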