Truthful assignment without money
In Proceedings of the 11th ACM Conference on Electronic Commerce (EC), 2010
Abstract

Cited by 26 (0 self)
We study the design of truthful mechanisms that do not use payments for the generalized assignment problem (GAP) and its variants. An instance of the GAP consists of a bipartite graph with jobs on one side and machines on the other. Machines have capacities and edges have values and sizes; the goal is to construct a welfare-maximizing feasible assignment. In our model of private valuations, motivated by impossibility results, the values and sizes on all job-machine pairs are public information; however, whether an edge exists or not in the bipartite graph is a job’s private information. That is, the selfish agents in our model are the jobs, and their private information is their edge set. We want to design mechanisms that are truthful without money (henceforth strategyproof), and produce assignments whose welfare …
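As a hypothetical illustration of the GAP structure described in this abstract (and not the paper's strategyproof mechanism), a value-greedy assignment that respects machine capacities might look like the sketch below; all job, machine, value, and size data are invented:

```python
# Illustrative sketch of a GAP instance and a simple value-greedy assignment.
# This is NOT the paper's truthful mechanism; it only shows the problem
# structure: jobs, machines with capacities, edges with values and sizes.

def greedy_gap(edges, capacity):
    """edges: dict (job, machine) -> (value, size); capacity: dict machine -> int.
    Assigns each job to at most one machine, greedily by edge value."""
    remaining = dict(capacity)
    assigned = {}
    # Consider job-machine pairs in decreasing order of value.
    for (job, machine), (value, size) in sorted(
            edges.items(), key=lambda kv: -kv[1][0]):
        if job not in assigned and size <= remaining[machine]:
            assigned[job] = machine
            remaining[machine] -= size
    return assigned

edges = {
    ("j1", "m1"): (10, 2),
    ("j1", "m2"): (6, 1),
    ("j2", "m1"): (8, 2),
    ("j3", "m2"): (5, 1),
}
capacity = {"m1": 2, "m2": 2}
print(greedy_gap(edges, capacity))
```

Note that a greedy rule like this is generally not strategyproof when jobs can hide edges, which is exactly the difficulty the paper addresses.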
Efficient Resource Allocation with Flexible Channel Cooperation in OFDMA Cognitive Radio Networks
Abstract

Cited by 21 (1 self)
Recently, a cooperative paradigm for single-channel cognitive radio networks has been advocated, where primary users can leverage secondary users to relay their traffic. However, it is not clear how such cooperation can be exploited effectively in multi-channel networks. Conventional cooperation entails that data on one channel has to be relayed on exactly the same channel, which is inefficient in multi-channel networks with channel and user diversity. Moreover, the selfishness of users complicates the critical resource allocation problem, as both parties aim to maximize their own utility. This work represents the first attempt to address these challenges. We propose FLEC, a novel design for flexible channel cooperation. It allows secondary users to freely optimize the use of channels for transmitting primary data along with their own data, in order to maximize performance. Further, we formulate a unifying optimization framework based on Nash Bargaining Solutions to fairly and efficiently address resource allocation between primary and secondary networks, in both decentralized and centralized settings. We present an optimal distributed algorithm and suboptimal centralized heuristics, and verify their effectiveness via realistic simulations.
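The Nash Bargaining Solution that the above framework builds on can be shown with a toy example (invented numbers, not FLEC's actual formulation): on a linear utility frontier, the NBS maximizes the product of the players' gains over their disagreement point, which amounts to splitting the surplus equally.

```python
# Tiny numeric sketch of a Nash Bargaining Solution: on a linear utility
# frontier u1 + u2 = total with disagreement point (d1, d2), the NBS
# maximizes (u1 - d1) * (u2 - d2). Illustrative only; FLEC applies this
# idea to rate allocation between primary and secondary users.

def nash_bargaining_linear(total, d1, d2):
    # Closed form on a linear frontier: split the surplus equally.
    surplus = total - d1 - d2
    return d1 + surplus / 2, d2 + surplus / 2

u1, u2 = nash_bargaining_linear(10, 2, 0)
print(u1, u2)  # 6.0 4.0
```

With disagreement point (2, 0) and total utility 10, the surplus of 8 is split evenly, giving (6, 4); any other split on the frontier yields a smaller product of gains.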
The Generalized Maximum Coverage Problem
Abstract

Cited by 15 (1 self)
We define a new problem called the Generalized Maximum Coverage Problem (GMC). GMC is an extension of the Budgeted Maximum Coverage Problem, and it has important applications in wireless OFDMA scheduling. We use a variation of the greedy algorithm to produce a ((2e − 1)/(e − 1) + ɛ)-approximation for every ɛ > 0, and then use partial enumeration to reduce the approximation ratio to e/(e − 1) + ɛ.
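For context, the classic greedy that such coverage algorithms build on can be sketched as follows; this is the plain greedy for unweighted Maximum Coverage (pick k sets covering the most elements), not the paper's GMC algorithm, and the sets below are invented:

```python
# Greedy for classic Maximum Coverage: repeatedly pick the set with the
# largest marginal coverage. This is the textbook (1 - 1/e)-approximation
# that budgeted/generalized variants extend; illustrative only.

def greedy_max_coverage(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        # Pick the set adding the most not-yet-covered elements.
        best = max(sets, key=lambda name: len(sets[name] - covered))
        if not sets[best] - covered:
            break  # no set adds anything new
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}}
chosen, covered = greedy_max_coverage(sets, 2)
print(chosen, covered)
```

Here the greedy first takes C (4 new elements), then A (3 new elements), covering all seven elements with two sets.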
Uncoordinated Two-Sided Matching Markets
In EC '08, 2008
Abstract

Cited by 14 (0 self)
Various economic interactions can be modeled as two-sided markets. A central solution concept for these markets is the stable matching, introduced by Gale and Shapley. It is well known that stable matchings can be computed in polynomial time, but many real-life markets lack a central authority to match agents. In those markets, matchings are formed by the actions of self-interested agents. Knuth introduced uncoordinated two-sided markets and showed that the uncoordinated better response dynamics may cycle. However, Roth and Vande Vate showed that the random better response dynamics converges to a stable matching with probability one, but did not address the question of convergence time. In this paper, we give an exponential lower bound for the convergence time of the random better response dynamics in two-sided markets. We also extend the results for the better response dynamics to the best response dynamics, i.e., we present a cycle of best responses, and prove that the random best response dynamics converges to a stable matching with probability one, but its convergence time is exponential. Additionally, we identify the special class of correlated matroid two-sided markets, with real-life applications, for which we prove that the random best response dynamics converges in expected polynomial time.
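The random better response dynamics discussed above can be sketched as follows (a minimal illustration with invented preference lists, not the paper's lower-bound construction): repeatedly pick a random blocking pair and let it match, until no blocking pair remains.

```python
import random

# Sketch of the random better response dynamics (Roth and Vande Vate style):
# resolve a random blocking pair until the matching is stable. Converges to
# a stable matching with probability one. Preferences are invented;
# lower index in a preference list means more preferred.

def blocking_pairs(match, m_pref, w_pref):
    pairs = []
    for m in m_pref:
        for w in w_pref:
            cur_w = match.get(m)
            cur_m = next((x for x, y in match.items() if y == w), None)
            # (m, w) blocks if each prefers the other to their current partner.
            m_better = cur_w is None or m_pref[m].index(w) < m_pref[m].index(cur_w)
            w_better = cur_m is None or w_pref[w].index(m) < w_pref[w].index(cur_m)
            if match.get(m) != w and m_better and w_better:
                pairs.append((m, w))
    return pairs

def random_better_response(m_pref, w_pref, seed=0):
    rng = random.Random(seed)
    match = {}  # man -> woman
    while True:
        blocking = blocking_pairs(match, m_pref, w_pref)
        if not blocking:
            return match
        m, w = rng.choice(blocking)
        # Divorce w's current partner, if any, then match m with w.
        for x in [x for x, y in match.items() if y == w]:
            del match[x]
        match[m] = w

m_pref = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
w_pref = {"w1": ["m1", "m2"], "w2": ["m1", "m2"]}
print(random_better_response(m_pref, w_pref))
```

In this tiny instance the unique stable matching pairs m1 with w1 and m2 with w2, so the dynamics always ends there; the paper's point is that on adversarial instances the number of steps can be exponential.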
Improved Approximation Algorithms for Budgeted Allocations
Abstract

Cited by 13 (1 self)
We provide a 3/2-approximation algorithm for an offline budgeted allocations problem, an improvement over the e/(e − 1)-approximation of Andelman and Mansour [1] and the e/(e − 1) − ɛ approximation (for ɛ ≈ 0.0001) of Feige and Vondrak [5] for the more general Maximum Submodular Welfare (SMW) problem. For a special case of our problem, we improve this ratio to √2. Finally, we prove that the problem is APX-hard. The problem we study has applications to sponsored search auctions.
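To make the problem concrete, here is a simple budget-capped greedy for budgeted allocation; this is illustrative only (not the paper's 3/2-approximation), and the items, bids, and budgets are invented:

```python
# Budgeted allocation sketch: assign each item to the bidder with the
# highest effective bid, where a bid is capped by the bidder's remaining
# budget. Illustrative only; NOT the paper's 3/2-approximation algorithm.

def greedy_budgeted_allocation(bids, budgets):
    """bids: dict item -> dict bidder -> bid; budgets: dict bidder -> budget.
    Returns (assignment, total revenue extracted)."""
    remaining = dict(budgets)
    assignment, revenue = {}, 0
    for item, offers in bids.items():
        # Effective value of giving this item to a bidder is budget-capped.
        bidder = max(offers, key=lambda b: min(offers[b], remaining[b]))
        gain = min(offers[bidder], remaining[bidder])
        if gain > 0:
            assignment[item] = bidder
            remaining[bidder] -= gain
            revenue += gain
    return assignment, revenue

bids = {"i1": {"a": 3, "b": 2}, "i2": {"a": 3, "b": 2}, "i3": {"b": 4}}
budgets = {"a": 4, "b": 4}
print(greedy_budgeted_allocation(bids, budgets))
```

The budget cap is the crux of the problem: once bidder a's budget is nearly spent, its nominal bid of 3 for i2 is worth only 1, so the greedy switches to bidder b.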
A Combinatorial Allocation Mechanism With Penalties For Banner Advertising
2008
Abstract

Cited by 11 (1 self)
Most current banner advertising is sold through negotiation, thereby incurring large transaction costs and possibly suboptimal allocations. We propose a new automated system for selling banner advertising. In this system, each advertiser specifies a collection of host webpages which are relevant to his product, a desired total quantity of impressions on these pages, and a maximum per-impression price. The system selects a subset of advertisers as winners and maps each winner to a set of impressions on pages within his desired collection. The distinguishing feature of our system as opposed to current combinatorial allocation mechanisms is that, mimicking the current negotiation system, we guarantee that winners receive at least as many advertising opportunities as they requested or else receive ample compensation in the form of a monetary payment by the host. Such guarantees are essential in markets like banner advertising where a major goal of the advertising campaign is developing brand recognition. As we show, the problem of selecting a feasible subset of advertisers with maximum total value is inapproximable. We thus present two greedy heuristics and discuss theoretical techniques to measure their performance. Our first algorithm iteratively selects advertisers and corresponding sets of impressions which contribute maximum marginal per-impression profit to the current solution. We prove a bicriteria approximation for this algorithm, showing that it generates approximately as much value as the optimum algorithm on a slightly harder problem. However, this algorithm might perform poorly on instances in which the value of the optimum solution is quite large, a clearly undesirable failure mode. Hence, we present an adaptive greedy algorithm which again iteratively selects advertisers with maximum marginal per-impression profit, but additionally reassigns impressions at each iteration.
For this algorithm, we prove a structural approximation result, a newly defined framework for evaluating heuristics [10]. We thereby prove that this algorithm has a better performance guarantee than the simple greedy algorithm.
Sharing-Aware Algorithms for Virtual Machine Colocation
Abstract

Cited by 10 (1 self)
Virtualization technology enables multiple virtual machines (VMs) to run on a single physical server. VMs that run on the same physical server can share memory pages that have identical content, thereby reducing the overall memory requirements on the server. We develop sharing-aware algorithms that can colocate VMs with similar page content on the same physical server to optimize the benefits of inter-VM sharing. We show that inter-VM sharing occurs in a largely hierarchical fashion, where the sharing can be attributed to VMs running the same OS platform, OS version, software libraries, or applications. We propose two hierarchical sharing models: a tree model and a more general cluster-tree model. Using a set of VM traces, we show that up to 67% of the inter-VM sharing is captured by the tree model and up to 82% is captured by the cluster-tree model. Next, we study two problem variants of critical interest to a virtualization service provider: the VM maximization problem, which determines the most profitable subset of the VMs that can be packed into a given set of servers, and the VM packing problem, which determines the smallest set of servers that can accommodate a set of VMs. While both variants are NP-hard, we show that both admit provably good approximation schemes in the hierarchical sharing models. We show that VM maximization for the tree and cluster-tree models can be approximated in polynomial time to within a (1 − 1/e) factor of optimal. Further, we show that VM packing can be approximated in polynomial time to within a factor of O(log n) of optimal for cluster-trees and to within a factor of 3 of optimal for trees, where n is the number of VMs. Finally, we evaluate our VM packing algorithm for the tree sharing model on real-world VM traces and show that our algorithm can exploit most of the available inter-VM sharing to achieve a 32% to 50% reduction in servers and a 25% to 57% reduction in memory footprint compared to sharing-oblivious algorithms.
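The core idea of sharing-aware packing can be sketched with a first-fit heuristic (invented page sets, not the paper's approximation algorithms): a server's memory use is the size of the union of its VMs' page sets, so VMs with overlapping pages pack together.

```python
# Sharing-aware first-fit sketch: a server's memory footprint is the size
# of the UNION of its VMs' page sets, so overlapping VMs pack together.
# Illustrative only; the paper exploits tree/cluster-tree sharing structure
# to obtain provable guarantees.

def sharing_aware_first_fit(vm_pages, server_capacity):
    servers = []  # each server holds the set of pages currently resident
    placement = {}
    for vm, pages in vm_pages.items():
        for i, resident in enumerate(servers):
            # The VM fits if the merged page set stays within capacity.
            if len(resident | pages) <= server_capacity:
                resident |= pages
                placement[vm] = i
                break
        else:
            servers.append(set(pages))
            placement[vm] = len(servers) - 1
    return placement, len(servers)

vm_pages = {
    "vm1": {1, 2, 3},      # e.g., common OS pages
    "vm2": {1, 2, 3, 4},   # shares pages 1-3 with vm1
    "vm3": {7, 8, 9, 10},  # different platform, no sharing
}
placement, used = sharing_aware_first_fit(vm_pages, 5)
print(placement, used)
```

A sharing-oblivious packer would charge vm1 and vm2 a combined 7 pages and need a second server for them; counting shared pages once, they fit together in 4 pages and only vm3 needs its own server.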
On Exploiting Diversity and Spatial Reuse in Relay-Enabled Wireless Networks
In MobiHoc ’08: Proc. of the 9th ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2008
Abstract

Cited by 10 (0 self)
Relay-enabled wireless networks (e.g., WiMAX 802.16j) represent an emerging trend toward the incorporation of multi-hop networking solutions for last-mile broadband access in next-generation wireless networks. The adoption of more sophisticated access technologies such as OFDM (orthogonal frequency division multiplexing), coupled with the relay-induced two-hop nature, provides two key benefits to these networks in the form of diversity and spatial reuse gains. However, leveraging these benefits calls for more sophisticated solutions, among which user scheduling forms a key component. We consider the specific problem of scheduling users with finite buffers on the multiple OFDM carriers (channels) over the two hops of the relay-enabled network. We propose scheduling algorithms that help leverage diversity and spatial reuse gains from these networks. We show that even the scheduling problem to exploit diversity gains alone is NP-hard, and provide both theoretically and practically efficient polynomial-time algorithms with approximation guarantees. Building on the diversity solutions, we also propose an efficient polynomial-time scheduling algorithm for exploiting both spatial reuse and diversity. The proposed solutions are evaluated to highlight the relative significance of diversity and spatial reuse gains with respect to varying network conditions.
The assignment problem in content distribution networks: Unsplittable hard-capacitated facility location
In Proc. of ACM-SIAM SODA, 2009
Abstract

Cited by 9 (4 self)
In a Content Distribution Network (CDN), there are m servers storing the data; each of them has a specific bandwidth. All the requests from a particular client should be assigned to one server, because of the routing protocol used. The goal is to minimize the total cost of these assignments — the cost of each is proportional to the distance as well as the request size — while the load on each server is kept below its bandwidth limit. When each server also has a setup cost, this is an unsplittable hard-capacitated facility location problem. As much attention as facility location problems have received, there has been no nontrivial approximation algorithm when we have hard capacities (i.e., there can only be one copy of each facility, whose capacity cannot be violated) and demands are unsplittable (i.e., all the demand from a client has to be assigned to a single facility). We observe that it is NP-hard to approximate the cost to within any bounded factor. Thus, for an arbitrary constant ɛ > 0, we relax the capacities by a 1 + ɛ factor. For the case where capacities are almost uniform, we give a bicriteria (O(log n), 1 + ɛ)-approximation algorithm for general metrics and a (1 + ɛ, 1 + ɛ)-approximation algorithm for tree metrics. A bicriteria (α, β)-approximation algorithm produces a solution of cost at most α times the optimum, while violating the capacities by no more than a β factor. We can get the same guarantee for non-uniform capacities if we allow quasi-polynomial running time. In our algorithm, some clients guess the facility they are assigned to, and facilities decide the sizes of the clients they serve. A straightforward approach results in exponential running time. When costs do not satisfy metricity, we show that a 1.5 violation of capacities is necessary to obtain any approximation.
It is worth noting that our results generalize bin packing (zero cost matrix and facility costs equal to one), knapsack (single facility with all costs being zero), minimum makespan scheduling for related machines (all costs being zero) and some facility location problems. Key words: approximation algorithm, PTAS, network, facility location