Results 11–20 of 188
Parameterized graph separation problems
 In Proc. 1st IWPEC, volume 3162 of LNCS, 2004
Abstract

Cited by 51 (4 self)
We consider parameterized problems where some separation property has to be achieved by deleting as few vertices as possible. The following five problems are studied: delete k vertices such that (a) each of the given ℓ terminals is separated from the others, (b) each of the given ℓ pairs of terminals is separated, (c) exactly ℓ vertices are cut away from the graph, (d) exactly ℓ connected vertices are cut away from the graph, (e) the graph is separated into at least ℓ components. We show that if both k and ℓ are …
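Variant (a) has a direct, if exponential, brute-force check: try every set of at most k non-terminal vertex deletions and test whether all terminals end up in different components. A minimal sketch (graph encoding and function names are illustrative, not from the paper):

```python
from itertools import combinations

def reachable(adj, start, removed):
    """Vertices reachable from `start`, ignoring `removed` vertices."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj.get(u, ()):
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def separates(adj, terminals, deleted):
    """True if, after deleting `deleted`, no two terminals are connected."""
    for t in terminals:
        comp = reachable(adj, t, deleted)
        if any(s != t and s in comp for s in terminals):
            return False
    return True

def min_terminal_separator(adj, terminals, k):
    """Smallest set of at most k non-terminal deletions separating every
    terminal from the others, by exhaustive search (illustration only)."""
    candidates = [v for v in adj if v not in terminals]
    for size in range(k + 1):
        for deleted in combinations(candidates, size):
            if separates(adj, terminals, set(deleted)):
                return set(deleted)
    return None
```

For a path a–b–c with terminals a and c, deleting {b} is the unique minimum separator.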
Correlation Clustering in General Weighted Graphs
 Theoretical Computer Science, 2006
Abstract

Cited by 44 (0 self)
We consider the following general correlation-clustering problem [1]: given a graph with real nonnegative edge weights and a 〈+〉/〈−〉 edge labeling, partition the vertices into clusters to minimize the total weight of cut 〈+〉 edges and uncut 〈−〉 edges. Thus, 〈+〉 edges with large weights (representing strong correlations between endpoints) encourage those endpoints to belong to a common cluster while 〈−〉 edges with large weights encourage the endpoints to belong to different clusters. In contrast to most clustering problems, correlation clustering specifies neither the desired number of clusters nor a distance threshold for clustering; both of these parameters are effectively chosen to be the best possible by the problem definition. Correlation clustering was introduced by Bansal, Blum, and Chawla [1], motivated by both document clustering and agnostic learning. They proved NP-hardness and gave constant-factor approximation algorithms for the special case in which the graph is complete (full information) and every edge has the same weight. We give an O(log n)-approximation algorithm for the general case based on a linear-programming rounding and the "region-growing" technique. We also prove that this linear program has a gap of Ω(log n), and therefore our approximation is tight under this approach. We also give an O(r^3)-approximation algorithm for K_{r,r}-minor-free graphs. On the other hand, we show that the problem is equivalent to minimum multicut, and therefore APX-hard and difficult to approximate better than Θ(log n).
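The objective being minimized translates directly into code. A small scorer for a candidate clustering (the edge and label encoding is an illustrative choice, not the paper's):

```python
def cc_cost(edges, cluster_of):
    """Correlation-clustering objective: total weight of '+' edges that are
    cut (endpoints in different clusters) plus '-' edges left uncut.
    `edges` is a list of (u, v, weight, sign) with sign in {'+', '-'};
    `cluster_of` maps each vertex to a cluster id."""
    cost = 0.0
    for u, v, w, sign in edges:
        same = cluster_of[u] == cluster_of[v]
        if sign == '+' and not same:
            cost += w   # strong '+' edge cut: penalized
        elif sign == '-' and same:
            cost += w   # strong '-' edge uncut: penalized
    return cost
```

With a 〈+〉 edge a–b of weight 2 and a 〈−〉 edge b–c of weight 1, the clustering {a, b} vs {c} has cost 0, while merging everything costs 1.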
The Complexity of Soft Constraint Satisfaction
, 2006
Abstract

Cited by 44 (13 self)
Over the past few years there has been considerable progress in methods to systematically analyse the complexity of constraint satisfaction problems with specified constraint types. One very powerful theoretical development in this area links the complexity of a set of constraints to a corresponding set of algebraic operations, known as polymorphisms. In this paper we extend the analysis of complexity to the more general framework of combinatorial optimisation problems expressed using various forms of soft constraints. We launch a systematic investigation of the complexity of these problems by extending the notion of a polymorphism to a more general algebraic operation, which we call a multimorphism. We show that many tractable sets of soft constraints, both established and novel, can be characterised by the presence of particular multimorphisms. We also show that a simple set of NP-hard constraints has very restricted multimorphisms. Finally, we use the notion of multimorphism to give a complete classification of complexity for the Boolean case which extends several earlier classification results for particular special cases.
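As a concrete instance of the idea, one well-known multimorphism is ⟨min, max⟩, whose presence corresponds to submodularity of the cost function. A brute-force check of the property for a binary cost function over a small ordered domain (a sketch with an assumed encoding):

```python
from itertools import product

def has_min_max_multimorphism(cost, domain):
    """Check the <min, max> multimorphism property for a binary cost
    function over an ordered finite domain:
        cost(min(a,c), min(b,d)) + cost(max(a,c), max(b,d))
            <= cost(a, b) + cost(c, d)   for all (a, b), (c, d).
    This is exactly submodularity of `cost`."""
    for a, b, c, d in product(domain, repeat=4):
        lhs = cost(min(a, c), min(b, d)) + cost(max(a, c), max(b, d))
        if lhs > cost(a, b) + cost(c, d):
            return False
    return True
```

For example, (x − y)^2 passes (it is a convex function of the difference, hence submodular) while the product x·y fails.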
A linear programming formulation and approximation algorithms for the metric labeling problem
 SIAM J. Discrete Math
Abstract

Cited by 43 (1 self)
We consider approximation algorithms for the metric labeling problem. This problem was introduced in a paper by Kleinberg and Tardos [J. ACM, 49 (2002), pp. 616–630] and captures many classification problems that arise in computer vision and related fields. They gave an O(log k log log k) approximation for the general case, where k is the number of labels, and a 2-approximation for the uniform metric case. (In fact, the bound for general metrics can be improved to O(log k) by the work of Fakcharoenphol, Rao, and Talwar [Proceedings …
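The objective in question combines per-vertex assignment costs with metric-weighted separation costs on edges; a minimal scorer (the data layout is assumed for illustration):

```python
def metric_labeling_cost(assign_cost, edges, metric, labeling):
    """Metric labeling objective: sum over vertices of the cost of the
    assigned label, plus, for each weighted edge (u, v, w), the term
    w * d(label(u), label(v)) where d is the metric on labels."""
    total = sum(assign_cost[v][labeling[v]] for v in labeling)
    for u, v, w in edges:
        total += w * metric[labeling[u]][labeling[v]]
    return total
```

On two vertices with a uniform (0/1) metric, the trade-off between assignment and separation costs is already visible: paying the edge penalty can beat paying a bad label's assignment cost.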
On Achieving Maximum Multicast Throughput in Undirected Networks
 IEEE/ACM Trans. Networking, 2006
Abstract

Cited by 36 (5 self)
The transmission of information within a data network is constrained by the network topology and link capacities. In this paper, we study the fundamental upper bound of information dissemination rates with these constraints in undirected networks, given the unique replicable and encodable properties of information flows. Based on recent advances in network coding and classical modeling techniques in flow networks, we provide a natural linear programming formulation of the maximum multicast rate problem. By applying Lagrangian relaxation on the primal and the dual linear programs (LPs), respectively, we derive a) a necessary and sufficient condition characterizing multicast rate feasibility, and b) an efficient and distributed subgradient algorithm for computing the maximum multicast rate. We also extend our discussions to multiple communication sessions, as well as to overlay and ad hoc network models. Both our theoretical and simulation results conclude that network coding may not be instrumental to achieve better maximum multicast rates in most cases; rather, it facilitates the design of significantly more efficient algorithms to achieve such optimality.
Fixed-parameter tractability of multicut parameterized by the size of the cutset
, 2011
Abstract

Cited by 33 (5 self)
Given an undirected graph G, a collection {(s1, t1), ..., (sk, tk)} of pairs of vertices, and an integer p, the EDGE MULTICUT problem asks if there is a set S of at most p edges such that the removal of S disconnects every si from the corresponding ti. VERTEX MULTICUT is the analogous problem where S is a set of at most p vertices. Our main result is that both problems can be solved in time 2^{O(p^3)} · n^{O(1)}, i.e., are fixed-parameter tractable parameterized by the size p of the cutset in the solution. By contrast, it is unlikely that an algorithm with running time of the form f(p) · n^{O(1)} exists for the directed version of the problem, as we show it to be W[1]-hard parameterized by the size of the cutset.
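Verifying that a candidate set S is an edge multicut is a simple reachability test (the hard part, of course, is finding S). A sketch with an assumed adjacency-list encoding:

```python
from collections import deque

def is_edge_multicut(n, edges, pairs, S):
    """True if removing edge set S from the graph with vertices 0..n-1 and
    undirected `edges` disconnects every pair (s_i, t_i) in `pairs`."""
    removed = {frozenset(e) for e in S}
    adj = [[] for _ in range(n)]
    for u, v in edges:
        if frozenset((u, v)) not in removed:
            adj[u].append(v)
            adj[v].append(u)
    for s, t in pairs:
        seen, q = {s}, deque([s])   # BFS from s in the pruned graph
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        if t in seen:
            return False
    return True
```

On the path 0–1–2–3 with the single pair (0, 3), removing edge (1, 2) is a valid multicut of size 1.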
A maximal tractable class of soft constraints
 J. Artif. Intell. Res., 2004
Abstract

Cited by 30 (12 self)
Many optimization problems can be expressed using some form of soft constraints, where different measures of desirability are associated with different combinations of domain values for specified subsets of variables. In this paper we identify a class of soft binary constraints for which the problem of finding the optimal solution is tractable. In other words, we show that for any given set of such constraints, there exists a polynomial time algorithm to determine the assignment having the best overall combined measure of desirability. This tractable class includes many commonly occurring soft constraints, such as "as near as possible" or "as soon as possible after", as well as crisp constraints such as "greater than".
Inapproximability of the Tutte polynomial
, 2008
Abstract

Cited by 29 (8 self)
The Tutte polynomial of a graph G is a two-variable polynomial T(G; x, y) that encodes many interesting properties of the graph. We study the complexity of the following problem, for rationals x and y: take as input a graph G, and output a value which is a good approximation to T(G; x, y). Jaeger, Vertigan and Welsh have completely mapped the complexity of exactly computing the Tutte polynomial. They have shown that this is #P-hard, except along the hyperbola (x − 1)(y − 1) = 1 and at four special points. We are interested in determining for which points (x, y) there is a fully polynomial randomised approximation scheme (FPRAS) for T(G; x, y). Under the assumption RP ≠ NP, we prove that there is no FPRAS at (x, y) if (x, y) is in one of the half-planes x < −1 or y < −1 (excluding the easy-to-compute cases mentioned above). Two exceptions to this result are the half-line x < −1, y = 1 (which is still open) and the portion of the hyperbola (x − 1)(y − 1) = 2 corresponding to y < −1 which we show …
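Exact evaluation, consistent with the #P-hardness result above, is still easy to write down for tiny graphs via the classical deletion–contraction recurrence T(G) = T(G − e) + T(G/e), with multiplicative factors x for bridges and y for loops. A minimal sketch (exponential time, illustration only):

```python
def tutte(edges, x, y):
    """Evaluate the Tutte polynomial T(G; x, y) by deletion-contraction.
    `edges` is a list of (u, v) pairs; parallel edges and loops allowed."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                      # loop: T = y * T(G - e)
        return y * tutte(rest, x, y)
    # contract e = (u, v): rename v to u in the remaining edges
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if is_bridge(u, v, rest):       # bridge: T = x * T(G / e)
        return x * tutte(contracted, x, y)
    # ordinary edge: T = T(G - e) + T(G / e)
    return tutte(rest, x, y) + tutte(contracted, x, y)

def is_bridge(u, v, rest):
    """e = (u, v) is a bridge iff v is unreachable from u in G - e."""
    seen, stack = {u}, [u]
    while stack:
        a = stack.pop()
        for b, c in rest:
            for p, q in ((b, c), (c, b)):
                if p == a and q not in seen:
                    seen.add(q)
                    stack.append(q)
    return v not in seen
```

Sanity checks against known identities: T(G; 1, 1) counts spanning trees, and T(G; 2, 2) = 2^|E|; the triangle gives x^2 + x + y.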
Approximate Inference in Graphical Models using LP Relaxations
, 2010
Abstract

Cited by 27 (1 self)
Graphical models such as Markov random fields have been successfully applied to a wide variety of fields, from computer vision and natural language processing, to computational biology. Exact probabilistic inference is generally intractable in complex models having many dependencies between the variables. We present new approaches to approximate inference based on linear programming (LP) relaxations. Our algorithms optimize over the cycle relaxation of the marginal polytope, which we show to be closely related to the first lifting of the Sherali–Adams hierarchy, and is significantly tighter than the pairwise LP relaxation. We show how to efficiently optimize over the cycle relaxation using a cutting-plane algorithm that iteratively introduces constraints into the relaxation. We provide a criterion to determine which constraints would be most helpful in tightening the relaxation, and give efficient algorithms for solving the search problem of finding the best cycle constraint to add according to this criterion.
An analysis of convex relaxations for MAP estimation of discrete MRFs
 Journal of Machine Learning Research, 2008
Abstract

Cited by 27 (1 self)
The problem of obtaining the maximum a posteriori estimate of a general discrete Markov random field (i.e., a Markov random field defined using a discrete set of labels) is known to be NP-hard. However, due to its central importance in many applications, several approximation algorithms have been proposed in the literature. In this paper, we present an analysis of three such algorithms based on convex relaxations: (i) LP-S: the linear programming (LP) relaxation proposed by Schlesinger (1976) for a special case and independently in Chekuri et al. (2001), Koster et al. (1998), and Wainwright et al. (2005) for the general case; (ii) QP-RL: the quadratic programming (QP) relaxation of Ravikumar and Lafferty (2006); and (iii) SOCP-MS: the second-order cone programming (SOCP) relaxation first proposed by Muramatsu and Suzuki (2003) for two-label problems and later extended by Kumar et al. (2006) for a general label set. We show that the SOCP-MS and the QP-RL relaxations are equivalent. Furthermore, we prove that despite the flexibility in the form of the constraints/objective function offered by QP and SOCP, the LP-S relaxation strictly dominates (i.e., provides a better approximation than) QP-RL and SOCP-MS. We generalize these results by defining a large class of SOCP (and equivalent QP) relaxations …
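For intuition, the quantity all three relaxations bound is the minimum of the MRF energy over labelings, which can be computed exactly by enumeration on tiny instances (the encoding below is assumed for illustration):

```python
from itertools import product

def map_by_enumeration(unary, pairwise, edges, num_labels):
    """Exact MAP of a small discrete MRF by brute force: minimize
    E(x) = sum_v unary[v][x_v] + sum_{(u,v) in edges} pairwise[(u,v)][x_u][x_v].
    Exponential in the number of variables; the LP/QP/SOCP relaxations
    discussed above bound this minimum in polynomial time instead."""
    n = len(unary)
    best_x, best_e = None, float('inf')
    for x in product(range(num_labels), repeat=n):
        e = sum(unary[v][x[v]] for v in range(n))
        e += sum(pairwise[(u, v)][x[u]][x[v]] for u, v in edges)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e
```

Any relaxation's optimal value is a lower bound on the energy this routine returns, which is what makes the dominance comparison between LP-S, QP-RL, and SOCP-MS meaningful.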