Results 1–8 of 8
Optimal Coalition Structures in Graph Games
2011
Cited by 3 (0 self)
Abstract. We consider the problem of finding the optimal coalition structure in Weighted Graph Games (WGGs), a simple restricted representation of coalitional games [12]. The agents in such games are vertices of the graph, and the value of a coalition is the sum of the weights of the edges present between coalition members. The optimal coalition structure is a partition of the agents into coalitions that maximizes the sum of utilities obtained by the coalitions. We show that finding the optimal coalition structure is not only hard for general graphs, but is also intractable for restricted families such as planar graphs, which are amenable to many other combinatorial problems. We then provide algorithms with constant-factor approximations for planar, minor-free and bounded-degree graphs.
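The value function described in the abstract can be made concrete with a short sketch (illustrative only; the function names and the dictionary-based edge-weight encoding are our own, not from the paper):

```python
from itertools import combinations

def coalition_value(coalition, weights):
    # Value of a coalition: sum of weights of edges whose endpoints
    # are both inside the coalition. Missing edges have weight 0.
    return sum(weights.get((min(u, v), max(u, v)), 0)
               for u, v in combinations(sorted(coalition), 2))

def structure_value(partition, weights):
    # Value of a coalition structure: sum of its coalitions' values.
    return sum(coalition_value(c, weights) for c in partition)
```

For example, with weights {(1, 2): 3, (2, 3): -5, (1, 3): 2}, the grand coalition {1, 2, 3} is worth 0, while the partition [{1, 2}, {3}] is worth 3 — splitting off the negative edge improves the structure, which is the optimization at the heart of the problem.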
As Strong as the Weakest Link: Mining Diverse Cliques in Weighted Graphs
Cited by 2 (1 self)
Abstract. Mining for cliques in networks provides an essential tool for the discovery of strong associations among entities. Applications vary, from extracting core subgroups in team performance data arising in sports, entertainment, research and business; to the discovery of functional complexes in high-throughput gene interaction data. A challenge in all of these scenarios is the large size of real-world networks and the computational complexity associated with clique enumeration. Furthermore, when mining for multiple cliques within the same network, the results need to be diversified in order to extract meaningful information that is both comprehensive and representative of the whole dataset. We formalize the problem of weighted diverse clique mining (mDkC) in large networks, incorporating both individual clique strength (measured by its weakest link) and diversity of the cliques in the result set. We show that the problem is NP-hard due to the diversity requirement. However, our formulation is submodular, and hence can be approximated within a constant factor of the optimal. We propose algorithms for mDkC that exploit the edge weight distribution in the input network and produce performance gains of more than 3 orders of magnitude compared to an exhaustive solution. One of our algorithms, Diverse Cliques (DiCliQ), guarantees a constant-factor approximation, while the other, Bottom-Up Diverse Cliques (BUDiC), scales to large and dense networks without compromising the solution quality. We evaluate both algorithms on 5 real-world networks of different genres and demonstrate their utility for discovery of gene complexes and effective collaboration subgroups in sports and entertainment.
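The weakest-link strength measure mentioned in the abstract is simply the minimum edge weight inside a clique; a minimal sketch (our own illustration, not the paper's DiCliQ or BUDiC algorithms):

```python
from itertools import combinations

def weakest_link(clique, weights):
    # Strength of a clique under the weakest-link measure: the minimum
    # weight among all edges between members of the clique. The weights
    # mapping uses frozensets of endpoints as keys (an encoding we chose
    # for illustration).
    return min(weights[frozenset(e)] for e in combinations(clique, 2))
```

So a triangle with edge weights 5, 2 and 9 has strength 2: one weak tie caps the association strength of the whole group.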
On Allocations with Negative Externalities
Cited by 1 (0 self)
Abstract. We consider the problem of a monopolist seller who wants to sell some items to a set of buyers. The buyers are strategic, unit-demand, and connected by a social network. Furthermore, the utility of a buyer is a decreasing function of the number of neighbors who own the item. In other words, buyers exhibit negative externalities, deriving utility from being unique in their purchases. In this model, any fixed setting of the price induces a subgame on the buyers. We show that it is an exact potential game which admits multiple pure Nash equilibria. A natural problem is to compute those pure Nash equilibria that raise the most and least revenue for the seller. These correspond, respectively, to the most optimistic and most pessimistic revenues that can be raised. We show that the revenues of both the best and worst equilibria are hard to approximate within sub-polynomial factors. Given this hardness, we consider a relaxed notion of pricing, where the price for the same item can vary within a constant factor for different buyers. We show a 4-approximation to the pessimistic revenue when the prices are relaxed by a factor of 4. The interesting aspect of this algorithm is that it uses a linear programming relaxation that only encodes part of the strategic behavior of the buyers in its constraints, and rounds this relaxation to obtain a starting configuration for performing relaxed Nash dynamics. Finally, for the maximum-revenue Nash equilibrium, we show a 2-approximation for bipartite graphs (without price relaxation), and complement this result by showing that the problem is NP-hard even on trees.
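The induced subgame can be illustrated with best-response dynamics (a minimal sketch under our own assumptions: a linear externality penalty per owning neighbor, which is one possible decreasing utility, not necessarily the paper's; in an exact potential game these dynamics are guaranteed to reach a pure Nash equilibrium):

```python
def best_response_dynamics(neighbors, value, price, ext):
    # Buyer i's utility for owning is value[i] - price - ext * (number of
    # neighbors who own); utility for not owning is 0. Each pass lets every
    # buyer best-respond; each strict improvement raises the potential
    # function, so the loop terminates at a pure Nash equilibrium.
    owns = {i: False for i in neighbors}
    changed = True
    while changed:
        changed = False
        for i in neighbors:
            owning_nbrs = sum(owns[j] for j in neighbors[i])
            should_own = value[i] - price - ext * owning_nbrs > 0
            if owns[i] != should_own:
                owns[i] = should_own
                changed = True
    return owns
```

On two connected buyers with value 10, price 5 and externality 8, whoever moves first buys and thereby deters the neighbor — a small instance of the multiplicity of equilibria the abstract refers to.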
Ad Allocation for Browse Sessions
Cited by 1 (0 self)
A user’s session of information need often goes well beyond his search query and first click on the search result page, and is therefore characterized by both search and browse activities on the web. Such a session can be effectively represented by the browse graph over the nodes visited by the user in the session. Thus, as the user transitions between pages in the browse graph, the effectiveness of the ads (click-to-conversion ratio) he sees on these pages could change. On the other hand, the advertiser’s valuation for a user also depends upon past events in the browse session (a.k.a. an externality): e.g., a shoe company may value a user differently in a browse session if he has not been shown any other shoe ads. In another instance, his valuation may be concave in the number of times the ad is shown to the user in the same session. We note that the advertiser’s valuation is derived from the conversion that a click might lead to; often, this is not correlated with the click for which the advertiser typically pays. The first contribution of our study is to show that the click-to-conversion (CtoC) ratio of a user depends on the past events in the session. To this end, we analyze logs of user activity over a period of one month from the Microsoft AdCenter Delivery Engine to identify the source, nature and extent of the externality present in the CtoC ratio as a function of past events. Specifically, we address externalities from past exposure of the user to the same and competing advertisers. We then propose a new bidding language that allows the advertiser to specify his valuation of a user’s click as a function of these externalities. We show the hardness of computing an optimal ad allocation in this setting and give efficient algorithms under some practical assumptions. Finally, we conduct an extensive empirical analysis on real data to measure the effectiveness of our proposed allocation schemes using the Bing AdCenter delivery engine logs.
Work done while the author was an intern at Microsoft
Approximation Algorithms and Hardness Results for Shortest Path Based Graph Orientations
The graph orientation problem calls for orienting the edges of an undirected graph so as to maximize the number of pre-specified source-target vertex pairs that admit a directed path from the source to the target. Most algorithmic approaches to this problem share a common preprocessing step, in which the input graph is reduced to a tree by repeatedly contracting its cycles. While this reduction is valid from an algorithmic perspective, the assignment of directions to the edges of the contracted cycles becomes arbitrary, and the connecting source-target paths may be arbitrarily long. In the context of biological networks, connecting vertex pairs via shortest paths is highly motivated, leading to the following variant: given an undirected graph and a collection of source-target vertex pairs, assign directions to the edges so as to maximize the number of pairs that are connected by a directed path that is shortest in the original graph. Here we study this variant, provide strong inapproximability results for it, and propose an approximation algorithm for the problem, as well as for relaxations in which the connecting paths need only be approximately shortest.
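The objective of the shortest-path variant can be stated as executable code via exhaustive search (a sketch for tiny instances only — this is not the paper's algorithm, and all names are ours):

```python
from collections import deque
from itertools import product

def bfs_dist(adj, s):
    # Unweighted single-source shortest-path distances via BFS.
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def best_orientation_value(nodes, edges, pairs):
    # Try all 2^|E| orientations; count source-target pairs joined by a
    # directed path whose length equals the undirected shortest-path
    # distance, and return the best count.
    undirected = {u: [] for u in nodes}
    for u, v in edges:
        undirected[u].append(v)
        undirected[v].append(u)
    udist = {s: bfs_dist(undirected, s) for s, _ in pairs}
    best = 0
    for choice in product([0, 1], repeat=len(edges)):
        adj = {u: [] for u in nodes}
        for (u, v), c in zip(edges, choice):
            if c:
                adj[u].append(v)
            else:
                adj[v].append(u)
        ddist = {s: bfs_dist(adj, s) for s, _ in pairs}
        sat = sum(1 for s, t in pairs
                  if t in ddist[s] and ddist[s][t] == udist[s].get(t))
        best = max(best, sat)
    return best
```

On the path 1–2–3 with the opposing pairs (1, 3) and (3, 1), no orientation can satisfy both, so the optimum is 1 — a small example of the tension the orientation must resolve.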
Payment Rules for Combinatorial Auctions via Structural Support Vector Machines
2011
Given an optimal solution to the winner determination problem of a combinatorial auction, standard approaches provide exact incentive compatibility. Even here, significant economic concerns typically preclude these approaches. For large combinatorial auction problems, however, winner determination can only be solved approximately due to its high computational complexity, and the design of appropriate payment rules for suboptimal winner determination remains a significant open problem. In this paper, we advocate the use of structural support vector machines to solve this pricing problem. The output of a winner determination algorithm, i.e., the allocation rule, is viewed as training data for a classification problem with distinct classes, each corresponding to one of the different bundles that can be allocated to an agent. The decision boundaries of a trained classifier are then used to construct a payment rule. An exact classifier produces a payment rule that, together with the allocation rule, yields a dominant-strategy incentive compatible mechanism. Moreover, minimizing regularized empirical error in training corresponds to minimizing a regularized upper bound on ex post regret for truthful bidding, allowing the approach to extend to non-implementable allocation rules.
Quasi-Random PCP and Hardness of 2-Catalog Segmentation
2009
We study the problem of 2-Catalog Segmentation, one of several variants of the segmentation problems introduced by Kleinberg et al. [KPR98] that naturally arise in data mining applications. Formally, given a bipartite graph G = (U, V, E) and a parameter r, the goal is to output two subsets V1, V2 ⊆ V, each of size r, that maximize Σ_{u∈U} max{|E(u, V1)|, |E(u, V2)|}, where E(u, Vi) is the set of edges between u and the vertices in Vi, for i = 1, 2. There is a simple 2-approximation for this problem, and stronger approximation factors are known for the special case when r = |V|/2 [DGK99, YC05]. On the other hand, the problem is known to be NP-hard [KPR98, DGK99, Mit04], and Feige [Fei02] showed a constant-factor hardness based on an assumption of average-case hardness of random 3-SAT. In this paper we show that there is no PTAS for 2-Catalog Segmentation assuming that NP does not have sub-exponential time probabilistic algorithms, i.e., NP ⊄ ∩_{ε>0} BPTIME(2^{n^ε}). In order to prove our result we strengthen the analysis of the Quasi-Random PCP of Khot [Kho06], which we transform ...
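The objective defined above translates directly into code; a brute-force sketch for tiny instances (illustrative only — the function names are ours, and no approximation algorithm from the literature is reproduced here):

```python
from itertools import combinations

def catalog_objective(edges, U, V1, V2):
    # 2-Catalog Segmentation objective: each u in U is credited with the
    # larger of its edge counts into catalog V1 and into catalog V2.
    nbrs = {u: set() for u in U}
    for u, v in edges:
        nbrs[u].add(v)
    return sum(max(len(nbrs[u] & set(V1)), len(nbrs[u] & set(V2)))
               for u in U)

def brute_force_best(edges, U, V, r):
    # Exhaustive search over all ordered pairs of size-r catalogs
    # (exponential; only feasible on toy inputs).
    return max(catalog_objective(edges, U, V1, V2)
               for V1 in combinations(V, r)
               for V2 in combinations(V, r))
```

For instance, with U = {a, b}, V = {1, 2, 3} and edges a–1, a–2, b–2, b–3, the optimum is 2 when r = 1 and 4 when r = 2 (take V1 = {1, 2} for a and V2 = {2, 3} for b).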