Results 1–7 of 7
Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems
Abstract

Cited by 23 (6 self)
In this paper we show how to accelerate randomized coordinate descent methods and achieve faster convergence rates without paying per-iteration costs in asymptotic running time. In particular, we show how to generalize and efficiently implement a method proposed by Nesterov, giving faster asymptotic running times for various algorithms that use standard coordinate descent as a black box. In addition to providing a proof of convergence for this new general method, we show that it is numerically stable, efficiently implementable, and in certain regimes, asymptotically optimal. To highlight the computational power of this algorithm, we show how it can be used to create faster linear system solvers in several regimes:
• We show how this method achieves a faster asymptotic runtime than conjugate gradient for solving a broad class of symmetric positive definite systems of equations.
• We improve the best known asymptotic convergence guarantees for Kaczmarz methods, a popular technique for image reconstruction and solving overdetermined systems of equations, by accelerating a randomized algorithm of Strohmer and Vershynin.
• We achieve the best known running time for solving Symmetric Diagonally Dominant (SDD) systems of equations in the unit-cost RAM model, obtaining an O(m log^{3/2} n log log n log(ε^{-1} log n)) asymptotic running time by accelerating a recent solver by Kelner et al.
Beyond the independent interest of these solvers, we believe they highlight the versatility of the approach of this paper and we hope that they will open the door for further algorithmic improvements in the future.
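The Kaczmarz result above accelerates the randomized scheme of Strohmer and Vershynin, in which each step projects the current iterate onto the hyperplane of one equation, sampled with probability proportional to its squared row norm. A minimal NumPy sketch of that (unaccelerated) base method, with hypothetical function and argument names, might look like:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=1000, seed=0):
    """Strohmer-Vershynin randomized Kaczmarz sketch for consistent Ax = b.

    Each iteration samples a row i with probability ||A[i]||^2 / ||A||_F^2
    and projects the iterate onto the hyperplane {y : A[i] @ y = b[i]}.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum('ij,ij->i', A, A)   # squared norm of each row
    probs = row_norms_sq / row_norms_sq.sum()    # sampling distribution
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Orthogonal projection of x onto the i-th solution hyperplane.
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x
```

For a consistent overdetermined system the expected squared error contracts geometrically at a rate governed by the scaled condition number of A; the paper's contribution is an accelerated variant with a strictly better dependence on that condition number.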
An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations
Abstract

Cited by 14 (7 self)
In this paper we present an almost-linear-time algorithm for solving approximate maximum flow in undirected graphs. In particular, given a graph with m edges we show how to produce a (1−ε)-approximate maximum flow in time O(m^{1+o(1)} · ε^{−2}). Furthermore, we present this algorithm as part of a general framework that also allows us to achieve a running time of O(m^{1+o(1)} ε^{−2} k^{2}) for the maximum concurrent k-commodity flow problem, the first such algorithm with an almost-linear dependence on m. We also note that independently Jonah Sherman has produced an almost-linear-time algorithm for maximum flow and we thank him for coordinating submissions.
Navigating Central Path with Electrical Flows: From Flows to Matchings, and Back
 FOCS
, 2013
"... We present an Õ(m ..."
A Novel, Simple Interpretation of Nesterov’s Accelerated Method as a Combination of Gradient and Mirror Descent. ArXiv e-prints, abs/1407.1537
, 2014
Abstract

Cited by 3 (1 self)
First-order methods play a central role in large-scale convex optimization. Even though many variations exist, each suited to a particular problem form, almost all such methods fundamentally rely on two types of algorithmic steps and two corresponding types of analysis: gradient-descent steps, which yield primal progress, and mirror-descent steps, which yield dual progress. In this paper, we observe that the performances of these two types of step are complementary, so that faster algorithms can be designed by coupling the two steps and combining their analyses. In particular, we show how to obtain a conceptually simple interpretation of Nesterov’s accelerated gradient method [Nes83, Nes04, Nes05], a cornerstone algorithm in convex optimization. Nesterov’s method is the optimal first-order method for the class of smooth convex optimization problems. However, to the best of our knowledge, the proof of the fast convergence of Nesterov’s method has not found a clear interpretation and is still regarded by many as crucially relying on an “algebraic trick” [Jud13]. We apply our novel insights to express Nesterov’s algorithm as a natural coupling of gradient descent and mirror descent and to write its proof of convergence as a simple combination of the convergence analyses of the two underlying steps. We believe that the complementary view of gradient descent and mirror descent proposed in this paper will prove very useful in the design of first-order methods as it allows us to design fast algorithms in a conceptually easier way. For instance, our view greatly facilitates the adaptation of nontrivial variants of Nesterov’s method to specific scenarios, such as packing and covering problems [AO14b, AO14a].
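The coupling described in this abstract maintains two iterates, one updated by a gradient step and one by a mirror step, and takes gradients at their convex combination. A minimal NumPy sketch of this viewpoint for a smooth convex objective with the Euclidean mirror map, using hypothetical function names and one common choice of step-size schedule (not necessarily the paper's exact parameters), might look like:

```python
import numpy as np

def accelerated_linear_coupling(grad, L, x0, iters=100):
    """Sketch of acceleration as a coupling of gradient and mirror descent.

    grad: gradient oracle of an L-smooth convex function
    y tracks the gradient-descent iterate (primal progress);
    z tracks the mirror-descent iterate (dual progress), here with the
    Euclidean mirror map, so the mirror step is plain subtraction.
    """
    y = x0.astype(float).copy()
    z = x0.astype(float).copy()
    for k in range(iters):
        alpha = (k + 2) / (2.0 * L)   # growing mirror step size
        tau = 2.0 / (k + 2)           # coupling weight
        x = tau * z + (1.0 - tau) * y # query point couples both iterates
        g = grad(x)
        y = x - g / L                 # gradient-descent step
        z = z - alpha * g             # mirror-descent step
    return y
```

With this schedule the function-value gap decays at the accelerated O(L/k^2) rate on smooth convex problems, versus O(L/k) for plain gradient descent.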
Smaller Steps for Faster Algorithms: A New Approach to Solving Linear Systems
, 2013
Abstract
In this thesis we study iterative algorithms with simple sublinear time update steps, and we show how a mix of data structures, randomization, and results from numerical analysis allows us to achieve faster algorithms for solving linear systems in a variety of different regimes. First we present a simple combinatorial algorithm for solving symmetric diagonally dominant (SDD) systems of equations that improves upon the best previously known running time for solving such systems in the standard unit-cost RAM model. Then we provide a general method for convex optimization that improves this simple algorithm's running time as a special case. Our results include the following:
* We achieve the best known running time of O(m log^{3/2} n log log n log(ε^{-1} log n)) for solving Symmetric Diagonally Dominant (SDD) systems of equations in the standard unit-cost RAM model.
* We obtain a faster asymptotic running time than conjugate gradient for solving a broad class of symmetric positive definite systems of equations.