Results 1 - 10 of 123
Improved algorithms for optimal winner determination in combinatorial auctions and generalizations, 2000
Cited by 582 (53 self)
Combinatorial auctions can be used to reach efficient resource and task allocations in multiagent systems where the items are complementary. Determining the winners is NP-complete and inapproximable, but it was recently shown that optimal search algorithms do very well on average. This paper presents a more sophisticated search algorithm for optimal (and anytime) winner determination, including structural improvements that reduce search tree size, faster data structures, and optimizations at search nodes based on driving toward, identifying and solving tractable special cases. We also uncover a more general tractable special case, and design algorithms for solving it as well as for solving known tractable special cases substantially faster. We generalize combinatorial auctions to multiple units of each item, to reserve prices on singletons as well as combinations, and to combinatorial exchanges -- all allowing for substitutability. Finally, we present algorithms for determining the winners in these generalizations.
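The winner determination problem this abstract refers to is weighted set packing: choose a set of pairwise-disjoint bids that maximizes total price. A minimal brute-force sketch in Python (illustration only; the paper's contribution is a search algorithm that prunes this exponential space, and the function name is my own):

```python
from itertools import combinations

def winner_determination(bids):
    """Exhaustive winner determination for a single-unit combinatorial auction.

    bids: list of (bundle, price) pairs, where bundle is a frozenset of items.
    Returns (best_value, best_set): the maximum total price over all sets of
    pairwise-disjoint bids, and one such set. Runs in O(2^n) -- a baseline
    sketch, not an optimal search algorithm like those in the paper.
    """
    best_value, best_set = 0, []
    for r in range(1, len(bids) + 1):
        for subset in combinations(bids, r):
            covered, feasible = set(), True
            for bundle, _price in subset:
                if covered & bundle:       # two bids share an item: infeasible
                    feasible = False
                    break
                covered |= bundle
            if feasible:
                value = sum(price for _bundle, price in subset)
                if value > best_value:
                    best_value, best_set = value, list(subset)
    return best_value, best_set
```

For example, with bids of 5 on {a, b}, 4 on {b, c}, 3 on {c}, and 2 on {a}, the optimum accepts the first and third bids for a total of 8.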
Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM, 2002
Cited by 230 (1 self)
Some important classical mechanisms considered in Microeconomics and Game Theory require the solution of a difficult optimization problem. This is true of mechanisms for combinatorial auctions, which have in recent years assumed practical importance, and in particular of the gold standard for combinatorial auctions, the Generalized Vickrey Auction (GVA). Traditional analysis of these mechanisms—in particular, their truth revelation properties—assumes that the optimization problems are solved precisely. In reality, these optimization problems can usually be solved only in an approximate fashion. We investigate the impact on such mechanisms of replacing exact solutions by approximate ones. Specifically, we look at a particular greedy optimization method. We show that the GVA payment scheme does not provide for a truth revealing mechanism. We introduce another scheme that does guarantee truthfulness for a restricted class of players. We demonstrate the latter property by identifying natural properties for combinatorial auctions and showing that, for our restricted class of players, they imply that truthful strategies are dominant. Those properties have applicability beyond the specific auction studied.
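The greedy allocation studied ranks bids and accepts each one whose bundle does not conflict with earlier winners. A sketch, assuming the ranking criterion price divided by the square root of the bundle size that the paper analyzes for single-minded bidders; the payment rule that restores truthfulness is omitted, and the function name is mine:

```python
import math

def greedy_allocation(bids):
    """Single-pass greedy allocation for a combinatorial auction.

    bids: list of (bidder, bundle, price), bundle a frozenset of items.
    Bids are ranked by price / sqrt(|bundle|) and accepted greedily if
    disjoint from already-granted bundles. Returns the winning bidders.
    The truthfulness-restoring payment scheme from the paper is omitted.
    """
    ranked = sorted(bids, key=lambda b: b[2] / math.sqrt(len(b[1])), reverse=True)
    winners, taken = [], set()
    for bidder, bundle, _price in ranked:
        if not (bundle & taken):   # no conflict with earlier winners
            winners.append(bidder)
            taken |= bundle
    return winners
```

For instance, a bid of 8 on {a} (rank 8) beats a bid of 10 on {a, b} (rank about 7.07), so the single-item bid is granted first and the larger bundle is rejected as conflicting.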
Winner determination in combinatorial auction generalizations, 2002
Cited by 175 (23 self)
Combinatorial markets where bids can be submitted on bundles of items can be economically desirable coordination mechanisms in multiagent systems where the items exhibit complementarity and substitutability. There has been a surge of recent research on winner determination in combinatorial auctions. In this paper we study a wider range of combinatorial market designs: auctions, reverse auctions, and exchanges, with one or multiple units of each item, with and without free disposal. We first theoretically characterize the complexity. The most interesting results are that reverse auctions with free disposal can be approximated, and in all of the cases without free disposal, even finding a feasible solution is NP-complete. We then ran experiments on known benchmarks as well as ones which we introduced, to study the complexity of the market variants in practice. Cases with free disposal tended to be easier than ones without. On many distributions, reverse auctions with free disposal were easier than auctions with free disposal—as the approximability would suggest—but interestingly, on one of the most realistic distributions they were harder. Single-unit exchanges were easy, but multi-unit exchanges were extremely hard.
CABOB: A fast optimal algorithm for combinatorial auctions
Cited by 137 (26 self)
Combinatorial auctions where bidders can bid on bundles of items can lead to more economical allocations, but determining the winners is NP-complete and inapproximable. We present CABOB, a sophisticated search algorithm for the problem. It uses decomposition techniques, upper and lower bounding (also across components), elaborate and dynamically chosen bid ordering heuristics, and a host of structural observations. Experiments against CPLEX 7.0 show that CABOB is usually faster, never drastically slower, and in many cases drastically faster. We also uncover interesting aspects of the problem itself. First, the problems with short bids that were hard for the first generation of specialized algorithms are easy. Second, almost all of the CATS distributions are easy, and become easier with more bids. Third, we test a number of random restart strategies, and show that they do not help on this problem because the run-time distribution does not have a heavy tail (at least not for CABOB).
On agent-mediated electronic commerce. IEEE Transactions on Knowledge and Data Engineering, 2003
Cited by 111 (15 self)
This paper surveys and analyzes the state of the art of agent-mediated electronic commerce (e-commerce), concentrating particularly on the business-to-consumer (B2C) and business-to-business (B2B) aspects. From the consumer buying behavior perspective, agents are being used in the following activities: need identification, product brokering, buyer coalition formation, merchant brokering, and negotiation. The roles of agents in B2B e-commerce are discussed through the business-to-business transaction model that identifies agents as being employed in partnership formation, brokering, and negotiation. Having identified the roles for agents in B2C and B2B e-commerce, some of the key underpinning technologies of this vision are highlighted. Finally, we conclude by discussing the future directions and potential impediments to the wide-scale adoption of agent-mediated e-commerce. Index Terms—Agent-mediated electronic commerce, intelligent agents.
Issues in computational Vickrey auctions. International Journal of Electronic Commerce, 2000
Cited by 65 (28 self)
The Vickrey auction has been widely advocated for multiagent systems. First we review its limitations so as to guide practitioners in their decision of when to use that protocol. These limitations include lower revenue than alternative protocols, lying in non-private-value auctions, bidder collusion, a lying auctioneer, and undesirable revelation of sensitive information. We discuss the special characteristics of Internet auctions: third party auction servers, cryptography, and how proxy agents relate to the revelation principle and fail to promote truth-telling.
CABOB: A Fast Optimal Algorithm for Winner Determination in Combinatorial Auctions, 2005
Cited by 55 (6 self)
Combinatorial auctions where bidders can bid on bundles of items can lead to more economically efficient allocations, but determining the winners is NP-complete and inapproximable. We present CABOB, a sophisticated optimal search algorithm for the problem. It uses decomposition techniques, upper and lower bounding (also across components), elaborate and dynamically chosen bid-ordering heuristics, and a host of structural observations. CABOB attempts to capture structure in any instance without making assumptions about the instance distribution. Experiments against the fastest prior algorithm, CPLEX 8.0, show that CABOB is often faster, seldom drastically slower, and in many cases drastically faster—especially in cases with structure. CABOB’s search runs in linear space and has significantly better anytime performance than CPLEX. We also uncover interesting aspects of the problem itself. First, problems with short bids, which were hard for the first generation of specialized algorithms, are easy. Second, almost all of the CATS distributions are easy, and the run time is virtually unaffected by the number of goods. Third, we test several random restart strategies, showing that they do not help on this problem—the run-time distribution does not have a heavy tail.
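One of the decomposition techniques mentioned above splits the instance into connected components of the bid conflict graph, where two bids conflict iff their bundles share an item, so each component can be solved independently and the results summed. A sketch of the idea (not CABOB itself; the function name is mine):

```python
from collections import defaultdict

def decompose(bundles):
    """Partition bids into independent winner-determination subproblems.

    bundles: list of frozensets of items, one per bid. Two bids belong to
    the same subproblem iff they are connected through shared items.
    Returns a list of lists of bid indices (the connected components of
    the bid conflict graph), computed with union-find over shared items.
    """
    parent = list(range(len(bundles)))

    def find(i):
        # path-halving union-find lookup
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    by_item = defaultdict(list)
    for i, bundle in enumerate(bundles):
        for item in bundle:
            by_item[item].append(i)
    for idxs in by_item.values():
        for j in idxs[1:]:          # all bids naming this item are connected
            parent[find(j)] = find(idxs[0])

    components = defaultdict(list)
    for i in range(len(bundles)):
        components[find(i)].append(i)
    return list(components.values())
```

For example, bids on {a, b} and {b, c} land in one component while a bid on {d} forms its own, so the latter can be decided in isolation.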
Costly valuation computation in auctions. In Proceedings of the Eighth Conference on Theoretical Aspects of Rationality and Knowledge (TARK VIII), Siena, 2001
Cited by 54 (26 self)
We investigate deliberation and bidding strategies of agents with unlimited but costly computation who are participating in auctions. The agents do not a priori know their valuations for the items being auctioned. Instead they devote computational resources to compute their valuations. We present a normative model of bounded rationality where deliberation actions of agents are incorporated into strategies and equilibria are analyzed for standard auction protocols. We show that even in settings such as English auctions where information about other agents' valuations is revealed for free by the bidding process, agents may still compute on opponents' valuation problems, incurring a cost, in order to determine how to bid. We compare the costly computation model of bounded rationality with a different model where computation is free but limited. For some auction mechanisms the equilibrium strategies are substantially different. It can be concluded that the model of bounded rationality impacts the agents' equilibrium strategies and must be considered when designing mechanisms for computationally limited agents.
Leveled Commitment Contracts and Strategic Breach, 2001
Cited by 49 (7 self)
In (automated) negotiation systems consisting of self-interested agents, contracts have traditionally been binding. Such contracts do not allow agents to capitalize on uncertain future events. Contingency contracts have been proposed to solve this problem. Contingency contracts are often impractical due to large numbers of interdependent and unanticipated future events on which to condition, and because some events are not mutually observable. We propose a leveled commitment contracting mechanism that allows agents to capitalize on uncertain future events by having the possibility of unilaterally decommitting from a contract based on local reasoning. Decommitment penalties are assigned to both agents in a contract: to be freed from the obligations of the contract, an agent only pays the penalty to the other party. One concern is that a self-interested agent would be reluctant to decommit because there is a chance that the other party will decommit. In this case the former agen...
Expressive commerce and its application to sourcing: How we conducted $35 billion of generalized combinatorial auctions
Cited by 48 (7 self)
Sourcing professionals buy several trillion dollars worth of goods and services yearly. We introduced a new paradigm called expressive commerce and applied it to sourcing. It combines the advantages of highly expressive human negotiation with the advantages of electronic reverse auctions. The idea is that supply and demand are expressed in drastically greater detail than in traditional electronic auctions, and are algorithmically cleared. This creates a Pareto efficiency improvement in the allocation (a win-win between the buyer and the sellers) but the market clearing problem is a highly complex combinatorial optimization problem. We developed the world’s fastest tree search algorithms for solving it. We have hosted $35 billion of sourcing using the technology, and created $4.4 billion of hard-dollar savings plus numerous harder-to-quantify benefits. The suppliers also benefited by being able to express production efficiencies and creativity, and through exposure problem removal. Supply networks were redesigned, with quantitative understanding of the tradeoffs, and implemented in weeks instead of months.