Results 1–10 of 23
Adaptive game playing using multiplicative weights
 GAMES AND ECONOMIC BEHAVIOR
, 1999
Abstract

Cited by 163 (19 self)
We present a simple algorithm for playing a repeated game. We show that a player using this algorithm suffers average loss that is guaranteed to come close to the minimum loss achievable by any fixed strategy. Our bounds are nonasymptotic and hold for any opponent. The algorithm, which uses the multiplicative-weight methods of Littlestone and Warmuth, is analyzed using the Kullback–Leibler divergence. This analysis yields a new, simple proof of the min–max theorem, as well as a provable method of approximately solving a game. A variant of our game-playing algorithm is proved to be optimal in a very strong sense.
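The multiplicative-weight update at the heart of such a strategy can be sketched as follows. This is an illustrative toy version in the spirit of the paper, not its exact algorithm; the learning rate `eta` and the assumption that losses lie in [0, 1] are choices made here for the example:

```python
import math

def hedge(loss_rounds, eta=0.5):
    """Play a repeated game with a multiplicative-weights (Hedge-style) strategy.

    loss_rounds: list of loss vectors, one per round; losses assumed in [0, 1].
    Returns the algorithm's total expected loss over all rounds.
    """
    n = len(loss_rounds[0])
    weights = [1.0] * n
    total_loss = 0.0
    for losses in loss_rounds:
        z = sum(weights)
        probs = [w / z for w in weights]          # mixed strategy this round
        total_loss += sum(p * l for p, l in zip(probs, losses))
        # multiplicatively penalize each action in proportion to its loss
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_loss
```

Against an adversary for whom one fixed action is always best, the strategy's total loss stays bounded while the number of rounds grows, which is the regret guarantee in miniature.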
Potential Function Methods for Approximately Solving Linear Programming Problems: Theory and Practice
, 2001
Abstract

Cited by 155 (4 self)
After several decades of sustained research and testing, linear programming has evolved into a remarkably reliable, accurate and useful tool for handling industrial optimization problems. Yet, large problems arising from several concrete applications routinely defeat the very best linear programming codes, running on the fastest computing hardware. Moreover, this is a trend that may well continue and intensify, as problem sizes escalate and the need for fast algorithms becomes more stringent. Traditionally, the focus in optimization algorithms, and in particular, in algorithms for linear programming, has been to solve problems "to optimality." In concrete implementations, this has always meant the solution of problems to some finite accuracy (for example, eight digits). An alternative approach would be to explicitly, and rigorously, trade off accuracy for speed. One motivating factor is that in many practical applications, quickly obtaining a partially accurate solution is much preferable to obtaining a very accurate solution very slowly. A secondary (and independent) consideration is that the input data in many practical applications has limited accuracy to begin with. During the last ten years, a new body of research has emerged, which seeks to develop provably good approximation algorithms for classes of linear programming problems. This work both has roots in fundamental areas of mathematical programming and is also framed in the context of the modern theory of algorithms. The result of this work has been a family of algorithms with solid theoretical foundations and with growing experimental success. In this manuscript we will study these algorithms, starting with some of the very earliest examples, and through the latest theoretical and computational developments.
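As a toy illustration of trading accuracy for speed, here is a minimal multiplicative-weights sketch for a packing feasibility problem: find x in the simplex with Ax ≤ 1, where the entries of A lie in [0, 1]. It is a simplified, hypothetical routine in the spirit of this line of work, not any specific algorithm from the manuscript; `rounds` and `eta` control the accuracy/speed trade-off:

```python
def approx_packing(A, rounds=1000, eta=0.1):
    """Approximately satisfy A @ x <= 1 over the simplex, assuming a
    feasible point exists. Returns x with constraints met up to O(eta)."""
    m, n = len(A), len(A[0])
    weights = [1.0] * m           # one penalty weight per constraint
    x = [0.0] * n                 # running average of chosen vertices
    for _ in range(rounds):
        z = sum(weights)
        p = [w / z for w in weights]
        # oracle step: pick the simplex vertex with least weighted cost
        j = min(range(n), key=lambda c: sum(p[i] * A[i][c] for i in range(m)))
        x[j] += 1.0 / rounds
        # penalize constraints that the chosen vertex loads heavily
        weights = [w * (1.0 + eta * A[i][j]) for i, w in enumerate(weights)]
    return x
```

Fewer rounds give a cruder but faster answer; the guarantee degrades gracefully instead of failing, which is exactly the accuracy-for-speed trade discussed above.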
The multiplicative weights update method: a meta algorithm and applications
, 2005
Abstract

Cited by 147 (13 self)
Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. We present a simple meta-algorithm that unifies these disparate algorithms and derives them as simple instantiations of the meta-algorithm.
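A minimal sketch of such a meta-algorithm, with the application-specific part abstracted into a cost oracle (the names and parameters here are illustrative assumptions; costs are taken to lie in [-1, 1]):

```python
def mwu_meta(n, cost_oracle, rounds=2000, eta=0.05):
    """Multiplicative-weights meta-algorithm: maintain a distribution over
    n elements, query an oracle for per-element costs, and update the
    weights multiplicatively. Returns the time-averaged distribution."""
    weights = [1.0] * n
    avg = [0.0] * n
    for _ in range(rounds):
        z = sum(weights)
        p = [w / z for w in weights]
        costs = cost_oracle(p)        # feedback for the current distribution
        weights = [w * (1.0 - eta * c) for w, c in zip(weights, costs)]
        avg = [a + q / rounds for a, q in zip(avg, p)]
    return avg
```

Different instantiations of `cost_oracle` recover different algorithms; for example, plugging in a best-response oracle for the matching-pennies zero-sum game drives the averaged distribution toward the (1/2, 1/2) equilibrium strategy.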
Sequential and parallel algorithms for mixed packing and covering
 IN 42ND ANNUAL IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE
, 2001
Abstract

Cited by 67 (6 self)
We describe sequential and parallel algorithms that approximately solve linear programs with no negative coefficients (a.k.a. mixed packing and covering problems). For explicitly given problems, our fastest sequential algorithm returns a solution satisfying all constraints within a 1±ε factor in O(md log(m)/ε²) time, where m is the number of constraints and d is the maximum number of constraints any variable appears in. Our parallel algorithm runs in time polylogarithmic in the input size times ε⁻⁴ and uses a total number of operations comparable to the sequential algorithm. The main contribution is that the algorithms solve mixed packing and covering problems (in contrast to pure packing or pure covering problems, which have only "≤" or only "≥" inequalities, but not both) and run in time independent of the so-called width of the problem.
Faster Approximation Schemes for Fractional Multicommodity Flow Problems
Abstract

Cited by 29 (0 self)
We present fully polynomial approximation schemes for concurrent multicommodity flow problems that run in time of minimum possible dependency on the number of commodities k. We show that by modifying the algorithms by Garg & Könemann [7] and Fleischer [5] we can reduce their running time on a graph with n vertices and m edges from Õ(ε⁻²(m² + km)) to Õ(ε⁻²m²) for an implicit representation of the output, or Õ(ε⁻²(m² + kn)) for an explicit representation, where Õ(f) denotes a quantity that is O(f log^O(1) m). The implicit representation consists of a set of trees rooted at sources (there can be more than one tree per source), and with sinks as their leaves, together with flow values for the flow directed from the source to the sinks in a particular tree. Given this implicit representation, the approximate value of the concurrent flow is known, but if we want the explicit flow per commodity per edge, we would have to combine all these trees together, and the cost of doing so may be prohibitive. In case we want to calculate explicitly the solution flow, we modify our schemes so that they run in additional time proportional to nk up to polylogarithmic factors (n is the number of nodes in the network). This is within a polylogarithmic factor of the trivial lower bound of time Ω(nk) needed to explicitly write down a multicommodity flow of k commodities in a network of n nodes. Therefore our schemes are within a polylogarithmic factor of the minimum possible dependency of the running time on the number of commodities k.
Efficient Algorithms Using The Multiplicative Weights Update Method
, 2006
Abstract

Cited by 28 (1 self)
Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more efficient algorithms is important for practical impact. In this thesis, we explore applications of the Multiplicative Weights method in the design of efficient algorithms for various optimization problems. This method, which was repeatedly discovered in quite diverse fields, is an algorithmic technique which maintains a distribution on a certain set of interest, and updates it iteratively by multiplying the probability mass of elements by suitably chosen factors based on feedback obtained by running another algorithm on the distribution. We present a single meta-algorithm which unifies all known applications of this method in a common framework. Next, we generalize the method to the setting of symmetric matrices rather than real numbers. We derive the following applications of the resulting Matrix Multiplicative Weights algorithm:
1. The first truly general, combinatorial, primal-dual method for designing efficient algorithms for semidefinite programming. Using these techniques, we obtain significantly faster algorithms for obtaining O(√log n) approximations to various graph partitioning problems, such as Sparsest Cut and Balanced Separator in both directed and undirected weighted graphs, and constraint satisfaction problems such as Min UnCut and Min 2CNF Deletion.
2. An Õ(n³) time derandomization of the Alon–Roichman construction of expanders using Cayley graphs. The algorithm yields a set of O(log n) elements which generates an expanding Cayley graph in any group of n elements.
3. An Õ(n³) time deterministic O(log n) approximation algorithm for the quantum hypergraph covering problem.
4. An alternative proof of a result of Aaronson that the γ-fat-shattering dimension of quantum states on n qubits is O(n/γ²).
Asymptotic Analysis of the Flow Deviation Method for the Maximum Concurrent Flow Problem
 Mathematical Programming
, 2000
Abstract

Cited by 14 (3 self)
We analyze the asymptotic behavior of the Flow Deviation Method, first presented in 1971 by Fratta, Gerla and Kleinrock, and show that when applied to packing linear programs such as the maximum concurrent flow problem, it yields a fully polynomial-time approximation scheme.
Online EndtoEnd Congestion Control
 IEEE Foundations of Computer Science
, 2002
Abstract

Cited by 12 (0 self)
Congestion control in the current Internet is accomplished mainly by TCP/IP. To understand the macroscopic network behavior that results from TCP/IP and similar end-to-end protocols, one main analytic technique is to show that the protocol maximizes some global objective function of the network traffic. Here we analyze a particular end-to-end, MIMD (multiplicative-increase, multiplicative-decrease) protocol. We show that if all users of the network use the protocol, and all connections last for at least logarithmically many rounds, then the total weighted throughput (value of all packets received) is near the maximum possible. Our analysis includes round-trip times, and (in contrast to most previous analyses) gives explicit convergence rates, allows connections to start and stop, and allows capacities to change. 1. Congestion control and optimization. Congestion control in the current Internet is accomplished mainly by TCP/IP — 90% of Internet traffic is TCP-based [41]. Meanwhile the design and analysis of TCP and other end-to-end congestion-control protocols are only partially understood and are becoming the subject of increasing attention [25, 28]. One main analytic technique is to interpret the protocol as solving some underlying combinatorial optimization problem on the network — to show that the protocol causes the traffic distribution, over time, to optimize some global objective function
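The multiplicative-increase, multiplicative-decrease rule itself is simple to state in code. The sketch below uses arbitrary illustrative factors, not the parameters analyzed in the paper, and abstracts the congestion signal into a boolean per round:

```python
def mimd_rate(signals, alpha=1.1, beta=0.5, r0=1.0):
    """Multiplicative-increase / multiplicative-decrease rate control.

    signals: iterable of booleans, True meaning congestion was detected
             that round. Returns the sending rate after all rounds.
    """
    rate = r0
    for congested in signals:
        # multiply the rate down on congestion, up otherwise
        rate = rate * beta if congested else rate * alpha
    return rate
```

Because both increase and decrease are multiplicative, the logarithm of the rate performs an additive walk, which is what makes multiplicative-weights-style analyses of such protocols possible.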
A New Approach to Computing Maximum Flows using Electrical Flows
 Proceedings of the 45th symposium on Theory of Computing  STOC ’13
, 2013
Abstract

Cited by 10 (4 self)
We give an algorithm which computes a (1−ε)-approximately maximum s–t flow in an undirected uncapacitated graph in time O((1/ε)·√(m/F)·m log² n), where F is the flow value. By trading this off against the Karger–Levine algorithm for undirected graphs which takes Õ(m + nF) time, we obtain a running time of Õ(mn^{1/3}/ε^{2/3}) for uncapacitated graphs, improving the previous best dependence on ε by a factor of O(1/ε³). Like the algorithm of Christiano, Kelner, Madry, Spielman and Teng, our algorithm reduces the problem to electrical flow computations which are carried out in linear time using fast Laplacian solvers. However, in contrast to previous work, our algorithm does not reweight the edges of the graph in any way, and instead uses local (i.e., non-s–t) electrical flows to reroute the flow on congested edges. The algorithm is simple and may be viewed as trying to find a point at the intersection of two convex sets (the affine subspace of s–t flows of value F and the ℓ∞ ball) by an accelerated version of the method of alternating projections due to Nesterov. By combining this with Ford and Fulkerson's augmenting paths algorithm, we obtain an exact algorithm with running time Õ(m^{5/4}F^{1/4}) for uncapacitated undirected graphs, improving the previous best running time of Õ(m + min(nF, m^{3/2})). We give a related algorithm with the same running time for approximate minimum cut, based on minimizing a smoothed version of the ℓ1 norm inside the cut space of the input graph. We show that the minimizer of this norm is related to an approximate blocking flow and use this to give an algorithm for computing a length-k approximately blocking flow in time Õ(m√k).
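The alternating-projections idea mentioned above can be illustrated on a toy instance: projecting back and forth between an affine line and an ℓ∞ ball in the plane. This is the plain, unaccelerated method, and the two sets are made-up examples for illustration, not the paper's flow subspace:

```python
def alternating_projections(project_a, project_b, x0, iters=200):
    """Find a point near the intersection of two convex sets by
    repeatedly projecting onto each set in turn."""
    x = x0
    for _ in range(iters):
        x = project_a(x)
        x = project_b(x)
    return x

# Set A: the affine line x + y = 2.  Set B: the l-infinity ball [-1, 1]^2.
def proj_line(p):
    # orthogonal projection onto {(x, y): x + y = 2}
    s = (2.0 - p[0] - p[1]) / 2.0
    return (p[0] + s, p[1] + s)

def proj_box(p):
    # clamp each coordinate into [-1, 1]
    return (max(-1.0, min(1.0, p[0])), max(-1.0, min(1.0, p[1])))
```

Here the two sets intersect only at (1, 1), and the iterates converge to that point; acceleration in the Nesterov sense speeds up exactly this kind of iteration.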