Results 1–10 of 65
Iterative Combinatorial Auctions: Achieving Economic and Computational Efficiency
Department of Computer and Information Science, University of Pennsylvania, 2001
Cited by 159 (19 self)
This thesis presents new auction-based mechanisms to coordinate systems of self-interested and autonomous agents, and new methods to design such mechanisms and prove their optimality...
eMediator: A Next Generation Electronic Commerce Server
Computational Intelligence, 2002
Cited by 123 (32 self)
This paper presents eMediator, an electronic commerce server prototype that demonstrates ways in which algorithmic support and game-theoretic incentive engineering can jointly improve the efficiency of e-commerce. eAuctionHouse, the configurable auction server, includes a variety of generalized combinatorial auctions and exchanges, pricing schemes, bidding languages, mobile agents, and user support for choosing an auction type. We introduce two new logical bidding languages for combinatorial markets: the XOR bidding language and the OR-of-XORs bidding language. Unlike the traditional OR bidding language, these are fully expressive. They therefore enable the use of the Clarke-Groves pricing mechanism for motivating the bidders to bid truthfully. eAuctionHouse also supports supply/demand curve bidding. eCommitter, the leveled commitment contract optimizer, determines the optimal contract price and decommitting penalties for a variety of leveled commitment contracting mechanisms, taking into account that rational agents will decommit strategically in Nash equilibrium. It also determines the optimal decommitting strategies for any given leveled commitment contract. eExchangeHouse, the safe exchange planner, enables unenforced anonymous exchanges by dividing the exchange into chunks and sequencing those chunks to be delivered safely in alternation between the buyer and the seller.
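The expressiveness gap between the OR and XOR bidding languages can be shown on a toy instance. A minimal sketch (function names and the example valuations are my own, not eMediator's API): a bidder who views two items as perfect substitutes is expressible in XOR but overvalued by OR.

```python
def or_value(bids, bundle):
    """OR language: any set of non-overlapping atomic bids may be accepted.
    Value of a bundle = best packing of atomic bids inside it."""
    best = 0
    for mask in range(1 << len(bids)):
        chosen = [bids[i] for i in range(len(bids)) if mask >> i & 1]
        items = [x for s, _ in chosen for x in s]
        if len(items) == len(set(items)) and set(items) <= bundle:
            best = max(best, sum(p for _, p in chosen))
    return best

def xor_value(bids, bundle):
    """XOR language: at most one atomic bid may be accepted."""
    return max((p for s, p in bids if s <= bundle), default=0)

# A bidder who sees A and B as perfect substitutes: either one is worth 5,
# but the pair is still worth only 5.  XOR captures this; OR cannot.
bids = [(frozenset("A"), 5), (frozenset("B"), 5)]
bundle = frozenset("AB")
print(xor_value(bids, bundle))  # 5  (the intended valuation)
print(or_value(bids, bundle))   # 10 (OR forces additivity across disjoint bids)
```

Because OR can only add disjoint atomic bids, any valuation with substitutes needs XOR (or OR-of-XORs) to be stated exactly.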
Preference Elicitation in Combinatorial Auctions (Extended Abstract)
In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), 2001
Cited by 108 (27 self)
Combinatorial auctions (CAs), where bidders can bid on bundles of items, can be very desirable market mechanisms when the items sold exhibit complementarity and/or substitutability, so the bidders' valuations for bundles are not additive. However, in a basic CA, the bidders may need to bid on exponentially many bundles, leading to difficulties in determining those valuations, undesirable information revelation, and unnecessary communication. In this paper we present a design of an auctioneer agent that uses topological structure inherent in the problem to reduce the amount of information that it needs from the bidders. An analysis tool is presented, as well as data structures for storing and optimally assimilating the information received from the bidders. Using this information, the agent then narrows down the set of desirable (welfare-maximizing or Pareto-efficient) allocations, and decides which questions to ask next. Several algorithms are presented that ask the bidders for value, order, and rank information. A method is presented for making the elicitor incentive compatible.
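The value-query setting can be illustrated with a toy harness. This sketch is entirely my own construction, not the paper's algorithms: it asks value queries lazily and caches answers while brute-forcing the welfare-maximizing allocation; the paper's lattice-based elicitors would additionally prune many of these queries.

```python
from itertools import product

class Elicitor:
    """Toy value-query elicitor: answers come from a hidden valuation
    table, and every distinct (bidder, bundle) query is counted once."""
    def __init__(self, valuations):
        self.valuations = valuations      # bidder -> {bundle: value}
        self.cache = {}
        self.queries = 0

    def ask(self, bidder, bundle):
        key = (bidder, bundle)
        if key not in self.cache:         # never ask the same question twice
            self.queries += 1
            self.cache[key] = self.valuations[bidder][bundle]
        return self.cache[key]

def best_allocation(elicitor, items, bidders):
    """Enumerate every assignment of items to bidders, querying bundle
    values only when an assignment actually needs them."""
    best, best_alloc = -1, None
    for assign in product(bidders, repeat=len(items)):
        alloc = {b: frozenset(i for i, a in zip(items, assign) if a == b)
                 for b in bidders}
        welfare = sum(elicitor.ask(b, alloc[b]) for b in bidders)
        if welfare > best:
            best, best_alloc = welfare, alloc
    return best, best_alloc

vals = {
    "B1": {frozenset(): 0, frozenset("p"): 4, frozenset("q"): 1,
           frozenset("pq"): 6},
    "B2": {frozenset(): 0, frozenset("p"): 1, frozenset("q"): 5,
           frozenset("pq"): 6},
}
e = Elicitor(vals)
welfare, alloc = best_allocation(e, "pq", ["B1", "B2"])
print(welfare, e.queries)  # welfare 9 (items split between the bidders)
```

The elicitor interface is the key abstraction: a smarter questioner plugs in the same `ask` method but orders and prunes its questions using the structure of the allocation lattice.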
Settling the Complexity of Computing Two-Player Nash Equilibria
Cited by 88 (5 self)
We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of Daskalakis, Goldberg, and Papadimitriou on the complexity of four-player Nash equilibria [21], settles a long-standing open problem in algorithmic game theory. It also serves as a starting point for a series of results concerning the complexity of two-player Nash equilibria. In particular, we prove the following theorems:
• Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time.
• The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time.
Our results also have a complexity implication in mathematical economics:
• Arrow-Debreu market equilibria are PPAD-hard to compute.
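A useful contrast to the hardness result: while finding a two-player Nash equilibrium is PPAD-complete, checking a proposed equilibrium is easy. A minimal verifier sketch (my own helper, not from the paper), using exact rational arithmetic to avoid floating-point issues:

```python
from fractions import Fraction as F

def is_nash(A, B, x, y):
    """Check whether mixed strategies x (row player) and y (column player)
    form a Nash equilibrium of the bimatrix game (A, B): every strategy
    played with positive probability must earn the maximal expected payoff."""
    n, m = len(A), len(A[0])
    row_pay = [sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]
    col_pay = [sum(B[i][j] * x[i] for i in range(n)) for j in range(m)]
    return (all(x[i] == 0 or row_pay[i] == max(row_pay) for i in range(n)) and
            all(y[j] == 0 or col_pay[j] == max(col_pay) for j in range(m)))

# Matching pennies: the unique equilibrium mixes 50/50 on both sides.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
half = [F(1, 2), F(1, 2)]
print(is_nash(A, B, half, half))      # True
print(is_nash(A, B, [1, 0], [1, 0]))  # False: the column player would deviate
```

Verification runs in polynomial time; the PPAD-completeness result says that no such shortcut is expected for *finding* the pair (x, y).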
CABOB: A Fast Optimal Algorithm for Winner Determination in Combinatorial Auctions
2005
Cited by 55 (6 self)
Combinatorial auctions, where bidders can bid on bundles of items, can lead to more economically efficient allocations, but determining the winners is NP-complete and inapproximable. We present CABOB, a sophisticated optimal search algorithm for the problem. It uses decomposition techniques, upper and lower bounding (also across components), elaborate and dynamically chosen bid-ordering heuristics, and a host of structural observations. CABOB attempts to capture structure in any instance without making assumptions about the instance distribution. Experiments against the fastest prior algorithm, CPLEX 8.0, show that CABOB is often faster, seldom drastically slower, and in many cases drastically faster—especially in cases with structure. CABOB's search runs in linear space and has significantly better anytime performance than CPLEX. We also uncover interesting aspects of the problem itself. First, problems with short bids, which were hard for the first generation of specialized algorithms, are easy. Second, almost all of the CATS distributions are easy, and the run time is virtually unaffected by the number of goods. Third, we test several random restart strategies, showing that they do not help on this problem—the run-time distribution does not have a heavy tail.
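The flavor of such a search can be sketched in a few lines. This is a toy depth-first branch-and-bound in the spirit of the abstract's description; CABOB's decomposition, dynamic bid ordering, and other machinery are omitted, and all names here are mine.

```python
def winner_determination(bids, items):
    """Toy branch-and-bound for winner determination.
    bids: list of (frozenset of items, price).  Returns maximal revenue."""
    bids = sorted(bids, key=lambda b: -b[1] / len(b[0]))  # price-per-item order

    def upper_bound(i, free):
        # Optimistic bound: every remaining bid that still fits could win.
        return sum(p for s, p in bids[i:] if s <= free)

    best = 0

    def dfs(i, free, revenue):
        nonlocal best
        best = max(best, revenue)
        if i == len(bids) or revenue + upper_bound(i, free) <= best:
            return                              # exhausted or pruned
        s, p = bids[i]
        if s <= free:
            dfs(i + 1, free - s, revenue + p)   # branch: accept bid i
        dfs(i + 1, free, revenue)               # branch: reject bid i

    dfs(0, frozenset(items), 0)
    return best

bids = [(frozenset("ab"), 5), (frozenset("bc"), 4),
        (frozenset("c"), 2), (frozenset("a"), 3)]
print(winner_determination(bids, "abc"))  # 7 (e.g. {a} for 3 plus {b,c} for 4)
```

Even this toy version shows the two levers the abstract mentions: the bid-ordering heuristic steers the search toward good solutions early, and the upper bound prunes subtrees that cannot beat the incumbent.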
Costly valuation computation in auctions
In Proceedings of the Eighth Conference on Theoretical Aspects of Rationality and Knowledge (TARK VIII), Siena, 2001
Cited by 54 (26 self)
We investigate deliberation and bidding strategies of agents with unlimited but costly computation who are participating in auctions. The agents do not a priori know their valuations for the items being auctioned. Instead they devote computational resources to compute their valuations. We present a normative model of bounded rationality where deliberation actions of agents are incorporated into strategies and equilibria are analyzed for standard auction protocols. We show that even in settings such as English auctions, where information about other agents' valuations is revealed for free by the bidding process, agents may still compute on opponents' valuation problems, incurring a cost, in order to determine how to bid. We compare the costly computation model of bounded rationality with a different model where computation is free but limited. For some auction mechanisms the equilibrium strategies are substantially different. It can be concluded that the model of bounded rationality impacts the agents' equilibrium strategies and must be considered when designing mechanisms for computationally limited agents.
Expressive commerce and its application to sourcing: How we conducted $35 billion of generalized combinatorial auctions
Cited by 48 (7 self)
Sourcing professionals buy several trillion dollars' worth of goods and services yearly. We introduced a new paradigm called expressive commerce and applied it to sourcing. It combines the advantages of highly expressive human negotiation with the advantages of electronic reverse auctions. The idea is that supply and demand are expressed in drastically greater detail than in traditional electronic auctions, and are algorithmically cleared. This creates a Pareto efficiency improvement in the allocation (a win-win between the buyer and the sellers), but the market clearing problem is a highly complex combinatorial optimization problem. We developed the world's fastest tree search algorithms for solving it. We have hosted $35 billion of sourcing using the technology, and created $4.4 billion of hard-dollar savings plus numerous harder-to-quantify benefits. The suppliers also benefited by being able to express production efficiencies and creativity, and through exposure problem removal. Supply networks were redesigned, with quantitative understanding of the tradeoffs, and implemented in weeks instead of months.
Computational Criticisms of the Revelation Principle
2003
Cited by 45 (11 self)
The revelation principle is a cornerstone tool in mechanism design. It states that one can restrict attention, without loss in the designer's objective, to mechanisms in which A) the agents report their types completely in a single step up front, and B) the agents are motivated to be truthful. We show that reasonable constraints on computation and communication can invalidate the revelation principle. Regarding A, we show that by moving to multistep mechanisms, one can reduce exponential communication and computation to linear, thereby answering a recognized important open question in mechanism design. Regarding B, we criticize the focus on truthful mechanisms, a dogma that has, to our knowledge, never been criticized before. First, we study settings where the optimal truthful mechanism is computationally hard for the center to execute. In that setting we show that by moving to insincere mechanisms, one can shift the burden of having to solve the hard problem from the center to one of the agents. Second, we study a new oracle model that captures the setting where utility values can be hard to compute even when all the pertinent information is available, a situation that occurs in many practical applications. In this model we show that by moving to insincere mechanisms, one can shift the burden of having to ask the oracle an exponential number of costly queries from the center to one of the agents. In both cases the insincere mechanism is equally good as the optimal truthful mechanism in the presence of unlimited computation. More interestingly, whereas being unable to carry out either difficult task would have hurt the center in achieving his objective in the truthful setting, if the agent is unable to carry out either difficult task, the value of the center's objective...
Partial-revelation VCG mechanism for combinatorial auctions
In Proceedings of the National Conference on Artificial Intelligence (AAAI)
Cited by 43 (18 self)
Winner determination in combinatorial auctions has received significant interest in the AI community in the last 3 years. Another difficult problem in combinatorial auctions is that of eliciting the bidders' preferences. We introduce a progressive, partial-revelation mechanism that determines an efficient allocation and the Vickrey payments. The mechanism is based on a family of algorithms that explore the natural lattice structure of the bidders' combined preferences. The mechanism elicits utilities in a natural sequence, and aims at keeping the amount of elicited information and the effort to compute the information minimal. We present analytical results on the amount of elicitation. We show that no value-querying algorithm that is constrained to querying feasible bundles can save more elicitation than one of our algorithms. We also show that one of our algorithms can determine the Vickrey payments as a costless byproduct of determining an optimal allocation.
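The Vickrey payments follow the Clarke pivot rule: each bidder pays the welfare loss its presence imposes on the other bidders. A brute-force sketch under assumed toy valuations (the names and values are mine, and this enumerates allocations directly rather than using the paper's lattice-based elicitation):

```python
from itertools import product

def optimal_welfare(vals, items, exclude=None):
    """Brute-force welfare-maximizing assignment of items to bidders,
    optionally excluding one bidder (used for the Clarke pivot term)."""
    bidders = [b for b in vals if b != exclude]
    best, best_alloc = -1, None
    for assign in product(bidders, repeat=len(items)):
        alloc = {b: frozenset(i for i, a in zip(items, assign) if a == b)
                 for b in bidders}
        w = sum(vals[b][alloc[b]] for b in bidders)
        if w > best:
            best, best_alloc = w, alloc
    return best, best_alloc

def vickrey_payments(vals, items):
    """Clarke pivot rule: bidder b pays (others' best welfare without b)
    minus (others' welfare in the chosen allocation)."""
    total, alloc = optimal_welfare(vals, items)
    pay = {}
    for b in vals:
        without_b, _ = optimal_welfare(vals, items, exclude=b)
        others_here = total - vals[b][alloc[b]]
        pay[b] = without_b - others_here
    return alloc, pay

vals = {
    "Alice": {frozenset(): 0, frozenset("x"): 3, frozenset("y"): 1,
              frozenset("xy"): 8},
    "Bob":   {frozenset(): 0, frozenset("x"): 2, frozenset("y"): 4,
              frozenset("xy"): 5},
}
alloc, pay = vickrey_payments(vals, "xy")
print(pay)  # Alice wins both items and pays 5; Bob wins nothing and pays 0
```

Note the cost structure the abstract addresses: this naive approach solves one full winner-determination problem per bidder, which is exactly the expense that computing the payments "as a costless byproduct" avoids.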
Strategic implications of uncertainty over one's own private value in auctions
Advances in Theoretical Economics, 2006
Cited by 36 (1 self)
A rational bidder in a private-value auction should be reluctant to incur the cost of perfectly estimating his value if it might not matter to the success of his bidding strategy. This can explain sniping — flurries of bids at the end of auctions — as the result of other bidders trying to avoid stimulating the victim into learning more about his value. The idea of value discovery also explains why a bidder might increase his bid ceiling in the course of an auction and why he would like to know the private values of other bidders.