Lagrangian relaxation based algorithms for convex programming problems (2004)

by R Khandekar

Results 1 - 10 of 21

The multiplicative weights update method: a meta algorithm and applications

by Sanjeev Arora, Elad Hazan, Satyen Kale, 2005
"... Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analysis are usually very similar and rely on an exponential potential function. We present a simple meta algorithm that unifies ..."
Abstract - Cited by 147 (13 self) - Add to MetaCart
Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. We present a simple meta algorithm that unifies these disparate algorithms and derives them as simple instantiations of the meta algorithm.

Citation Context

...a potential function argument and the final running time is proportional to 1/ε^2. It has been clear to most researchers that these results are very similar, see for instance, Khandekar’s PhD thesis [Kha04]. Here we point out that these are all instances of the same (more general) algorithm. This meta ...
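
To make the weight-update loop concrete, here is a minimal Python sketch of a multiplicative weights iteration of the kind the abstract and the quoted context describe; it is an illustration under assumed conventions (n decisions, per-round costs in [-1, 1] supplied by a user-defined cost oracle), not the authors' own pseudocode.

import random

def multiplicative_weights(n, cost_oracle, rounds, eps=0.1):
    """Maintain a distribution over n decisions and scale the weights
    multiplicatively according to the observed costs (assumed in [-1, 1])."""
    weights = [1.0] * n
    alg_cost = 0.0                  # expected cost paid by the algorithm
    cum_cost = [0.0] * n            # cumulative cost of each fixed decision
    for _ in range(rounds):
        total = sum(weights)
        p = [w / total for w in weights]
        costs = cost_oracle(p)      # the oracle reports a cost per decision
        alg_cost += sum(pi * m for pi, m in zip(p, costs))
        for i, m in enumerate(costs):
            weights[i] *= (1.0 - eps * m)   # multiplicative weight update
            cum_cost[i] += m
    return alg_cost, min(cum_cost)  # algorithm vs. best decision in hindsight

# Toy run with random costs; the standard analysis (via the exponential
# potential mentioned above) bounds the gap between the two returned numbers
# by roughly eps*rounds + (ln n)/eps.
random.seed(0)
oracle = lambda p: [random.uniform(-1, 1) for _ in range(len(p))]
print(multiplicative_weights(4, oracle, rounds=200))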

Computing correlated equilibria in Multi-Player Games

by Christos H. Papadimitriou - STOC'05, 2005
"... We develop a polynomial-time algorithm for finding correlated equilibria (a well-studied notion of rationality due to Aumann that generalizes the Nash equilibrium) in a broad class of succinctly representable multiplayer games, encompassing essentially all known kinds, including all graphical games, ..."
Abstract - Cited by 96 (6 self) - Add to MetaCart
We develop a polynomial-time algorithm for finding correlated equilibria (a well-studied notion of rationality due to Aumann that generalizes the Nash equilibrium) in a broad class of succinctly representable multiplayer games, encompassing essentially all known kinds, including all graphical games, polymatrix games, congestion games, scheduling games, local effect games, as well as several generalizations. Our algorithm is based on a variant of the existence proof due to Hart and Schmeidler [11], and employs linear programming duality, the ellipsoid algorithm, Markov chain steady state computations, as well as application-specific methods for computing multivariate expectations.
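
For orientation, the object being computed above can be written as an explicit linear feasibility problem. In standard notation (not necessarily the paper's), a correlated equilibrium of a game with players i, strategy sets S_i and utilities u_i is a distribution x over joint strategy profiles s = (s_1, ..., s_n) such that no player benefits from deviating after seeing its recommended strategy:

\[
\sum_{s \,:\, s_i = j} x_s \bigl( u_i(s) - u_i(k, s_{-i}) \bigr) \;\ge\; 0
\qquad \text{for every player } i \text{ and all } j, k \in S_i,
\]
\[
\sum_{s} x_s = 1, \qquad x_s \ge 0 .
\]

In succinctly represented games this linear program has exponentially many variables, which is why the LP duality and ellipsoid machinery mentioned in the abstract is needed.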

Optimal Hierarchical Decompositions for Congestion Minimization in Networks

by Harald Räcke, 2008
"... Hierarchical graph decompositions play an important role in the design of approximation and online algorithms for graph problems. This is mainly due to the fact that the results concerning the approximation of metric spaces by tree metrics (e.g. [10, 11, 14, 16]) depend on hierarchical graph decompo ..."
Abstract - Cited by 58 (1 self) - Add to MetaCart
Hierarchical graph decompositions play an important role in the design of approximation and online algorithms for graph problems. This is mainly due to the fact that the results concerning the approximation of metric spaces by tree metrics (e.g. [10, 11, 14, 16]) depend on hierarchical graph decompositions. In this line of work a probability distribution over tree graphs is constructed from a given input graph, in such a way that the tree distances closely resemble the distances in the original graph. This makes it possible to solve many problems with a distance-based cost function on trees, and then transfer the tree solution to general undirected graphs with only a logarithmic loss in the performance guarantee. The results about oblivious routing [30, 22] in general undirected graphs are based on hierarchical decompositions of a different type, in the sense that they aim to approximate the bottlenecks in the network (instead of the point-to-point distances). We call such decompositions cut-based decompositions. It has been shown that they can also be used to design approximation and online algorithms for a wide variety of different problems, but at the current state of the art the performance guarantee goes down by an O(log^2 n log log n) factor when making the transition from tree networks to general graphs. In this paper we show how to construct cut-based decompositions that only result in a logarithmic loss in performance, which is asymptotically optimal. Remarkably, one major ingredient of our proof is a distance-based decomposition scheme due to Fakcharoenphol, Rao and Talwar [16]. This shows an interesting relationship between these seemingly different decomposition techniques. The main applications of the new decomposition are an optimal O(log n)-competitive algorithm for oblivious routing in general undirected graphs, and an O(log n)-approximation for Minimum Bisection, which improves the O(log^{1.5} n) approximation ...
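
One common way to phrase the kind of guarantee sketched in this abstract (the paper's own normalization may differ) is via a distribution \{(\lambda_i, T_i)\} over decomposition trees, each tree coming with a mapping M_i of tree edges back to paths in G, such that for every multicommodity demand d

\[
\mathrm{cong}_{T_i}(d) \;\le\; \mathrm{cong}_{G}(d) \ \ \text{for every } i,
\qquad
\mathbb{E}_{i \sim \lambda}\bigl[\mathrm{cong}_{G}(M_i(\text{tree routing of } d))\bigr] \;\le\; \beta \cdot \mathrm{cong}_{G}(d),
\]

where cong denotes minimum congestion. In this reading, the paper achieves quality \beta = O(\log n) for cut-based decompositions, improving the earlier O(\log^2 n \log\log n) factor.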

Efficient Algorithms Using The Multiplicative Weights Update Method

by Satyen Kale, 2006
"... Abstract Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more eff ..."
Abstract - Cited by 28 (1 self) - Add to MetaCart
Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more efficient algorithms is important for practical impact. In this thesis, we explore applications of the Multiplicative Weights method in the design of efficient algorithms for various optimization problems. This method, which was repeatedly discovered in quite diverse fields, is an algorithmic technique which maintains a distribution on a certain set of interest, and updates it iteratively by multiplying the probability mass of elements by suitably chosen factors based on feedback obtained by running another algorithm on the distribution. We present a single meta-algorithm which unifies all known applications of this method in a common framework. Next, we generalize the method to the setting of symmetric matrices rather than real numbers. We derive the following applications of the resulting Matrix Multiplicative Weights algorithm:
1. The first truly general, combinatorial, primal-dual method for designing efficient algorithms for semidefinite programming. Using these techniques, we obtain significantly faster algorithms for obtaining O(√log n) approximations to various graph partitioning problems, such as Sparsest Cut and Balanced Separator in both directed and undirected weighted graphs, and constraint satisfaction problems such as Min UnCut and Min 2CNF Deletion.
2. An Õ(n^3) time derandomization of the Alon-Roichman construction of expanders using Cayley graphs. The algorithm yields a set of O(log n) elements which generates an expanding Cayley graph in any group of n elements.
3. An Õ(n^3) time deterministic O(log n) approximation algorithm for the quantum hypergraph covering problem.
4. An alternative proof of a result of Aaronson that the γ-fat-shattering dimension of quantum states on n qubits is O(n/γ^2).

Citation Context

...a potential function argument and the final running time is proportional to 1/ε^2. It has been clear to most researchers that these results are very similar, see for instance, Khandekar’s PhD thesis [64]. In this chapter, we develop a unified framework for all these algorithms. This meta algorithm is a generalization of Littlestone and Warmuth’s Weighted Majority algorithm from learning theory [78]. ...
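
As a companion to the scalar update above, the matrix generalization mentioned in the abstract replaces the weight vector by a density matrix obtained from a matrix exponential of the accumulated gain matrices. The Python fragment below is a hedged illustration of that update (numpy/scipy assumed, symmetric gain matrices with eigenvalues in [0, 1] assumed), not the thesis's own algorithm statement.

import numpy as np
from scipy.linalg import expm

def matrix_mw(gain_oracle, n, rounds, eps=0.1):
    """gain_oracle(P) returns a symmetric n x n gain matrix with eigenvalues
    in [0, 1]; P is the current density matrix (PSD with unit trace)."""
    cumulative = np.zeros((n, n))
    for _ in range(rounds):
        expo = expm(eps * cumulative)
        density = expo / np.trace(expo)   # matrix analogue of the weight distribution
        gain = gain_oracle(density)
        cumulative += gain                # accumulate observed gain matrices
    return cumulative / rounds            # average gain matrix over all rounds

# Toy usage: a fixed random PSD gain matrix, rescaled so its spectrum lies in [0, 1].
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = A @ A.T
M /= np.linalg.eigvalsh(M).max()
print(np.linalg.eigvalsh(matrix_mw(lambda P: M, n=5, rounds=50)).max())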

Approximation schemes for packing with item fragmentation

by Hadas Shachnai, Tami Tamir, Omer Yehezkely - Theory Comput. Syst.
"... We consider two variants of the classical bin packing problem in which items may be fragmented. This can potentially reduce the total number of bins needed for packing the instance. However, since fragmentation incurs overhead, we attempt to avoid it as much as possible. In bin packing with size inc ..."
Abstract - Cited by 13 (3 self) - Add to MetaCart
We consider two variants of the classical bin packing problem in which items may be fragmented. This can potentially reduce the total number of bins needed for packing the instance. However, since fragmentation incurs overhead, we attempt to avoid it as much as possible. In bin packing with size increasing fragmentation (BP-SIF), fragmenting an item increases the input size (due to a header/footer of fixed size that is added to each fragment). In bin packing with size preserving fragmentation (BP-SPF), there is a bound on the total number of fragmented items. These two variants of bin packing capture many practical scenarios, including message transmission in community TV networks, VLSI circuit design and preemptive scheduling on parallel machines with setup times/setup costs. While neither BP-SPF nor BP-SIF belongs to the class of problems that admit a polynomial time approximation scheme (PTAS), we show in this paper that both problems admit a dual PTAS and an asymptotic PTAS. We also develop, for each of the problems, a dual asymptotic fully polynomial time approximation scheme (AFPTAS). Our AFPTASs are based on a non-standard transformation of the mixed packing and covering linear program formulations of our problems into pure covering programs, which enables us to solve these programs efficiently.
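
To illustrate the BP-SIF cost model only, here is a short Python sketch of a naive next-fit heuristic in which every fragment (including an unsplit item) pays a fixed header; this is not the dual PTAS/AFPTAS developed in the paper, and the bin capacity and header size are made-up parameters.

def next_fit_fragmentation(sizes, bin_capacity=1.0, header=0.05):
    """Pack items, splitting an item across bins whenever it does not fit;
    each fragment is charged `header` units of overhead (a simplified
    version of the BP-SIF overhead model)."""
    bins_used = 0
    free = 0.0                           # free space in the currently open bin
    for size in sizes:
        left = size
        while left > 1e-12:
            if free <= header + 1e-12:   # not enough room for a header plus data
                bins_used += 1
                free = bin_capacity
            piece = min(left, free - header)
            free -= piece + header
            left -= piece
    return bins_used

print(next_fit_fragmentation([0.6, 0.7, 0.3, 0.9, 0.2]))  # 4 bins with these toy sizes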

Efficient algorithms for online convex optimization and their applications

by Elad Hazan, 2006
"... ..."
Abstract - Cited by 12 (2 self) - Add to MetaCart
Abstract not found

Vertex sparsifiers: New results from old techniques

by Matthias Englert, Anupam Gupta, Robert Krauthgamer, Harald Räcke, Inbal Talgam-Cohen, Kunal Talwar - In 13th International Workshop on Approximation, Randomization, and Combinatorial Optimization, volume 6302 of Lecture Notes in Computer Science, 2010
"... Given a capacitated graph G = (V, E) and a set of terminals K ⊆ V, how should we produce a graph H only on the terminals K so that every (multicommodity) flow between the terminals in G could be supported in H with low congestion, and vice versa? (Such a graph H is called a flow-sparsifier for G.) ..."
Abstract - Cited by 12 (5 self) - Add to MetaCart
Given a capacitated graph G = (V, E) and a set of terminals K ⊆ V, how should we produce a graph H only on the terminals K so that every (multicommodity) flow between the terminals in G could be supported in H with low congestion, and vice versa? (Such a graph H is called a flow-sparsifier for G.) What if we want H to be a “simple” graph? What if we allow H to be a convex combination of simple graphs? Improving on results of Moitra [FOCS 2009] and Leighton and Moitra [STOC 2010], we give efficient algorithms for constructing:
(a) a flow-sparsifier H that maintains congestion up to a factor of O(log k / log log k), where k = |K|;
(b) a convex combination of trees over the terminals K that maintains congestion up to a factor of O(log k);
(c) for a planar graph G, a convex combination of planar graphs that maintains congestion up to a constant factor.
The last result requires us to give a new algorithm for the 0-extension problem, the first one in which the preimages of each terminal are connected in G. Moreover, this result extends to minor-closed families of graphs. Our bounds immediately imply improved approximation guarantees for several terminal-based cut and ordering problems.
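
For reference, the quality notion used informally here can be stated as follows (standard flow-sparsifier terminology, not a quotation from the paper): a graph H on the terminal set K is a flow-sparsifier of quality q ≥ 1 for G if, for every demand d supported on K,

\[
\mathrm{cong}_{H}(d) \;\le\; \mathrm{cong}_{G}(d) \;\le\; q \cdot \mathrm{cong}_{H}(d),
\]

where cong denotes the minimum congestion needed to route d. Results (a), (b) and (c) above then correspond to quality O(log k / log log k), O(log k) and O(1), respectively.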

Citation Context

...e used to obtain this algorithmic version of the transfer theorem. Merely for completeness, in the following we show how to derive the algorithmic result from a special case of a theorem by Khandekar [21]. Theorem 6 (see [21, Theorem 5.1.6]). Let P ⊆ R^d be a nonempty convex set for some d, and for each e ∈ E, let f_e : P → R_≥0 be a nonnegative continuous convex function. Suppose we have an oracle that, ...

Fractional covering with upper bounds on the variables: solving LPs with negative entries

by Naveen Garg, Rohit Khandekar - In Proc. 12th European Symposium on Algorithms (ESA), LNCS 3321, 2004
"... We present a Lagrangian relaxation technique to solve a class of linear programs with negative coefficients in the objective function and the constraints. We apply this technique to solve (the dual of) covering linear programs with upper bounds on the variables: min{c ⊤ x | Ax ≥ b, x ≤ u, x ≥ 0} wh ..."
Abstract - Cited by 6 (1 self) - Add to MetaCart
We present a Lagrangian relaxation technique to solve a class of linear programs with negative coefficients in the objective function and the constraints. We apply this technique to solve (the dual of) covering linear programs with upper bounds on the variables: min{c^⊤ x | Ax ≥ b, x ≤ u, x ≥ 0} where c, u ∈ R^m_+, b ∈ R^n_+ and A ∈ R^{n×m}_+ have non-negative entries. We obtain a strictly feasible, (1 + ε)-approximate solution by making O(mε^{-2} log m + min{n, log log C}) calls to an oracle that finds the most-violated constraint. Here C is the largest entry in c or u, m is the number of variables, and n is the number of covering constraints. Our algorithm follows naturally from the algorithm for the fractional packing problem and improves the previous best bound of O(mε^{-2} log(mC)) given by Fleischer [1]. Also, for a fixed ε, if the number of covering constraints is polynomial, our algorithm makes a number of oracle calls that is strongly polynomial.
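
Since the negative entries referred to in the title arise in the dual, it may help to write that dual out; a routine LP-duality computation (notation as in the abstract) shows that the dual of

\[
\min\{\, c^{\top}x \;:\; Ax \ge b,\; x \le u,\; x \ge 0 \,\}
\]

is

\[
\max\{\, b^{\top}y - u^{\top}z \;:\; A^{\top}y - z \le c,\; y \ge 0,\; z \ge 0 \,\},
\]

so although A, b, c and u are nonnegative, the dual has negative coefficients both in its objective (the -u^⊤z term) and in its constraint matrix (the -z columns), which is exactly the class of LPs the Lagrangian relaxation technique is designed to handle.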

Approximate convex optimization by online game playing

by Elad Hazan - CoRR
"... Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain a ε approximate solution is proportional to 1 ε2. Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for frac ..."
Abstract - Cited by 6 (2 self) - Add to MetaCart
Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain an ε-approximate solution is proportional to 1/ε^2. Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for fractional packing linear programs which runs in 1/ε iterations. The latter algorithm requires solving a convex quadratic program in every iteration, an optimization subroutine which dominates the theoretical running time. We give an algorithm for convex programs with strictly convex constraints which runs in time proportional to 1/ε. The algorithm does not require solving any quadratic program, but uses only gradient steps and elementary operations. Problems which have strictly convex constraints include maximum entropy frequency estimation, portfolio optimization with loss risk constraints, and various computational problems in signal processing. As a side product, we also obtain a simpler version of Bienstock and Iyengar’s result for general linear programming, with similar running time. We derive these algorithms using a new framework for deriving convex optimization algorithms from online game playing algorithms, which may be of independent interest.
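
The reduction from online game playing to approximate optimization that the abstract alludes to can be summarized by one standard inequality (a generic statement, not the paper's exact theorem). If g(x, y) is convex in x and concave in y, and the two players play against each other's iterates using online algorithms with regrets R_x(T) and R_y(T) over T rounds, then the averaged iterates \bar{x} = \frac{1}{T}\sum_t x_t and \bar{y} = \frac{1}{T}\sum_t y_t satisfy

\[
\max_{y}\, g(\bar{x}, y) \;-\; \min_{x}\, g(x, \bar{y}) \;\le\; \frac{R_x(T) + R_y(T)}{T}.
\]

With O(\sqrt{T}) regret this recovers the usual 1/ε^2 iteration count, while the logarithmic-regret algorithms available under strong convexity give roughly 1/ε iterations, which is the distinction drawn above for strictly convex constraints.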

An Approximation Algorithm for the General Mixed Packing and Covering Problem

by Florian Diedrich, Klaus Jansen - In ESCAPE
"... Abstract. We present a price-directive decomposition algorithm to compute an approximate solution of the mixed packing and covering problem; it either finds x ∈ B such that f(x) ≤ c(1 + ɛ)a and g(x) ≥ (1 − ɛ)b/c or correctly decides that {x ∈ B|f(x) ≤ a, g(x) ≥ b} = ∅. Heref,g are vectors of M ..."
Abstract - Cited by 3 (1 self) - Add to MetaCart
We present a price-directive decomposition algorithm to compute an approximate solution of the mixed packing and covering problem; it either finds x ∈ B such that f(x) ≤ c(1 + ε)a and g(x) ≥ (1 − ε)b/c or correctly decides that {x ∈ B | f(x) ≤ a, g(x) ≥ b} = ∅. Here f, g are vectors of M ≥ 2 convex and concave functions, respectively, which are nonnegative on the convex compact set ∅ ≠ B ⊆ R^N; B can be queried by a feasibility oracle or block solver, a, b ∈ R^M_{++}, and c is the block solver’s approximation ratio. The algorithm needs only O(M(ln M + ε^{-2} ln ε^{-1})) iterations, a runtime bound independent of c and the input data. Our algorithm is a generalization of [16] and also approximately solves the fractional packing and covering problem where f, g are linear and B is a polytope; there, a width-independent runtime bound is obtained.