Results 1–10 of 10
Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems
Abstract

Cited by 23 (6 self)
In this paper we show how to accelerate randomized coordinate descent methods and achieve faster convergence rates without paying per-iteration costs in asymptotic running time. In particular, we show how to generalize and efficiently implement a method proposed by Nesterov, giving faster asymptotic running times for various algorithms that use standard coordinate descent as a black box. In addition to providing a proof of convergence for this new general method, we show that it is numerically stable, efficiently implementable, and in certain regimes, asymptotically optimal. To highlight the computational power of this algorithm, we show how it can be used to create faster linear system solvers in several regimes:
• We show how this method achieves a faster asymptotic runtime than conjugate gradient for solving a broad class of symmetric positive definite systems of equations.
• We improve the best known asymptotic convergence guarantees for Kaczmarz methods, a popular technique for image reconstruction and solving overdetermined systems of equations, by accelerating a randomized algorithm of Strohmer and Vershynin.
• We achieve the best known running time for solving Symmetric Diagonally Dominant (SDD) systems of equations in the unit-cost RAM model, obtaining an O(m log^{3/2} n √(log log n) log(ε^{-1} log n)) asymptotic running time by accelerating a recent solver by Kelner et al.
Beyond the independent interest of these solvers, we believe they highlight the versatility of the approach of this paper and we hope that they will open the door for further algorithmic improvements in the future.
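The Strohmer–Vershynin method accelerated in the second bullet is simple to state; a minimal sketch of the unaccelerated base method (illustrative Python with made-up data, not the paper's accelerated variant) is:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Strohmer-Vershynin randomized Kaczmarz: repeatedly project the
    iterate onto the hyperplane of one equation, sampling row i with
    probability proportional to ||A_i||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Project x onto the hyperplane {y : <A_i, y> = b_i}
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# Consistent overdetermined system: b = A @ x_true exactly
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
x = randomized_kaczmarz(A, A @ x_true)
print(np.linalg.norm(x - x_true))  # small: linear convergence in expectation
```

The norm-weighted row sampling is what gives the expected convergence rate depending on the scaled condition number; the accelerated version in the paper improves this dependence further.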
An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations
Abstract

Cited by 14 (7 self)
In this paper we present an almost-linear-time algorithm for solving approximate maximum flow in undirected graphs. In particular, given a graph with m edges we show how to produce a (1−ε)-approximate maximum flow in time O(m^{1+o(1)} · ε^{−2}). Furthermore, we present this algorithm as part of a general framework that also allows us to achieve a running time of O(m^{1+o(1)} ε^{−2} k²) for the maximum concurrent k-commodity flow problem, the first such algorithm with an almost-linear dependence on m. We also note that independently Jonah Sherman has produced an almost-linear-time algorithm for maximum flow and we thank him for coordinating submissions.
Navigating Central Path with Electrical Flows: From Flows to Matchings, and Back
 FOCS
, 2013
"... We present an Õ(m ..."
A Novel, Simple Interpretation of Nesterov’s Accelerated Method as a Combination of Gradient and Mirror Descent. arXiv e-prints, abs/1407.1537
, 2014
Abstract

Cited by 3 (1 self)
First-order methods play a central role in large-scale convex optimization. Even though many variations exist, each suited to a particular problem form, almost all such methods fundamentally rely on two types of algorithmic steps and two corresponding types of analysis: gradient-descent steps, which yield primal progress, and mirror-descent steps, which yield dual progress. In this paper, we observe that the performances of these two types of steps are complementary, so that faster algorithms can be designed by coupling the two steps and combining their analyses. In particular, we show how to obtain a conceptually simple interpretation of Nesterov’s accelerated gradient method [Nes83, Nes04, Nes05], a cornerstone algorithm in convex optimization. Nesterov’s method is the optimal first-order method for the class of smooth convex optimization problems. However, to the best of our knowledge, the proof of the fast convergence of Nesterov’s method has not found a clear interpretation and is still regarded by many as crucially relying on an “algebraic trick” [Jud13]. We apply our novel insights to express Nesterov’s algorithm as a natural coupling of gradient descent and mirror descent and to write its proof of convergence as a simple combination of the convergence analyses of the two underlying steps. We believe that the complementary view of gradient descent and mirror descent proposed in this paper will prove very useful in the design of first-order methods, as it allows us to design fast algorithms in a conceptually easier way. For instance, our view greatly facilitates the adaptation of nontrivial variants of Nesterov’s method to specific scenarios, such as packing and covering problems [AO14b, AO14a].
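The coupling described above can be sketched for the Euclidean mirror map, where the mirror step reduces to a gradient step with a growing step size. The parameter schedule below (τ_k = 2/(k+2), mirror weight proportional to k) is a standard illustrative choice, not the paper's notation:

```python
import numpy as np

def linear_coupling(grad, x0, L, iters=1000):
    """Nesterov acceleration viewed as a coupling of a gradient-descent
    iterate y (primal progress) and a mirror-descent iterate z (dual
    progress), here with the Euclidean mirror map."""
    y = z = np.asarray(x0, dtype=float)
    for k in range(iters):
        tau = 2.0 / (k + 2)
        x = tau * z + (1 - tau) * y        # couple the two iterates
        g = grad(x)
        y = x - g / L                      # gradient step: primal progress
        z = z - (k + 2) / (2.0 * L) * g    # mirror step with growing weight
    return y

# Smooth test problem: f(x) = 0.5 x^T Q x - b^T x, with L = lambda_max(Q)
Q = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
x_star = np.linalg.solve(Q, b)
x = linear_coupling(lambda v: Q @ v - b, np.zeros(3), L=100.0)
print(np.linalg.norm(x - x_star))  # decays at the accelerated O(1/k^2) rate
```

The point of the coupling view is that the analysis splits cleanly: the gradient steps are charged against primal progress and the mirror steps against a dual regret bound, and combining the two recovers the O(L/k²) rate.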
Following the Path of Least Resistance: An Õ(m√n) Algorithm for the Minimum Cost Flow Problem
, 2013
Abstract

Cited by 2 (0 self)
In this paper we present an Õ(m√n log² U) time algorithm for solving the maximum flow problem on directed graphs with m edges, n vertices, and capacity ratio U. This improves upon the previous fastest running time of O(m · min(n^{2/3}, m^{1/2}) log(n²/m) log U) achieved over 15 years ago by Goldberg and Rao [8], and improves upon the previous best running times for solving dense directed unit-capacity graphs of O(min{m^{3/2}, mn^{2/3}}) achieved by Even and Tarjan [6] over 35 years ago and a running time of O(m^{10/7}) achieved recently by Madry [21]. We achieve these results through the development and application of a new general interior point method that we believe is of independent interest. The number of iterations required by this algorithm is better than that predicted by analyzing the best self-concordant barrier of the feasible region. By applying this method to the linear programming formulations of maximum flow, minimum cost flow, and lossy generalized minimum cost flow, and applying analysis by Daitch and Spielman [5], we achieve a running time of Õ(m√n log²(U/ε)) for these problems as well. Furthermore, our algorithm is parallelizable, and using a recent nearly-linear-work, polylogarithmic-depth Laplacian system solver of Spielman and Peng [25], we achieve an Õ(√n log²(U/ε))-depth, Õ(m√n log²(U/ε))-work algorithm for solving these problems.
Quantum algorithms for approximating the effective resistances in electrical networks
, 2013
Abstract

Cited by 2 (0 self)
The theory of electrical networks has many applications in algorithm design and analysis. It is an important task to compute the basic quantities of electrical networks, such as electrical flows and effective resistances, as quickly as possible. Classically, to compute these quantities, one basically needs to solve a Laplacian linear system, and the best known algorithms take Õ(m) time, where m is the number of edges. In this paper, we present two quantum algorithms for approximating the effective resistance between any two vertices in an electrical network. Both of them have time complexity polynomial in log n, d, c, 1/φ and 1/ε, where n is the number of vertices, d is the maximum degree of the vertices, c is the ratio of the largest to the smallest edge resistance, φ is the expansion of the network, and ε is the relative error. In particular, when d and c are small and φ is large, our algorithms run very fast. In contrast, it is unknown whether classical algorithms can solve this case very fast. Furthermore, we prove that the polynomial dependence on the inverse expansion (i.e., 1/φ) is necessary. As a consequence, our algorithms cannot be significantly improved. Finally, as a byproduct, our second algorithm also produces a quantum state approximately proportional to the electrical flow between any two vertices, which might be of independent interest. Our algorithms are based on using quantum tools to analyze the algebraic properties of graph-related matrices. While one of them relies on inverting the Laplacian matrix, the other relies on projecting onto the kernel of the weighted incidence matrix. We hope that more quantum algorithms can be developed in a similar way.
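For the classical baseline mentioned above (solving a Laplacian system), the effective resistance between vertices s and t is (e_s − e_t)^T L^+ (e_s − e_t). A small dense-matrix sketch, with edge weights taken as conductances and a made-up example graph:

```python
import numpy as np

def effective_resistance(edges, conductances, n, s, t):
    """Effective resistance via the Laplacian pseudoinverse:
    R_eff(s, t) = (e_s - e_t)^T L^+ (e_s - e_t)."""
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, conductances):
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    chi = np.zeros(n)
    chi[s], chi[t] = 1.0, -1.0
    # pinv handles the all-ones null space of a connected graph's Laplacian
    potentials = np.linalg.pinv(L) @ chi
    return chi @ potentials

# Two unit resistors in series between vertices 0 and 2: R_eff = 2
r_eff = effective_resistance([(0, 1), (1, 2)], [1.0, 1.0], n=3, s=0, t=2)
print(r_eff)  # 2.0 up to floating point
```

The dense pseudoinverse here costs O(n³); the Õ(m) classical solvers cited in the abstract, and the quantum algorithms of the paper, are precisely about avoiding that cost.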
Smaller Steps for Faster Algorithms: A New Approach to Solving Linear Systems
, 2013
Abstract
In this thesis we study iterative algorithms with simple sublinear-time update steps, and we show how a mix of data structures, randomization, and results from numerical analysis allows us to achieve faster algorithms for solving linear systems in a variety of different regimes. First we present a simple combinatorial algorithm for solving symmetric diagonally dominant (SDD) systems of equations that improves upon the best previously known running time for solving such systems in the standard unit-cost RAM model. Then we provide a general method for convex optimization that improves this simple algorithm's running time as a special case. Our results include the following:
• We achieve the best known running time of O(m log^{3/2} n √(log log n) log(ε^{-1} log n)) for solving Symmetric Diagonally Dominant (SDD) systems of equations in the standard unit-cost RAM model.
• We obtain a faster asymptotic running time than conjugate gradient for solving a broad class of symmetric positive definite systems of equations.