Results 1–10 of 10
Finding Small Sparse Cuts by Random Walk
Cited by 4 (0 self)
Abstract. We study the problem of finding a small sparse cut in an undirected graph. Given an undirected graph G = (V,E) and a parameter k ≤ |E|, the small sparsest cut problem is to find a set S ⊆ V with minimum conductance among all sets with volume at most k. Using ideas developed in local graph partitioning algorithms, we obtain the following bicriteria approximation algorithms for the small sparsest cut problem:
– If there is a set U ⊆ V with conductance φ and vol(U) ≤ k, then there is a polynomial-time algorithm to find a set S with conductance O(√(φ/ε)) and vol(S) ≤ k^(1+ε) for any ε > 1/k.
– If there is a set U ⊆ V with conductance φ and vol(U) ≤ k, then there is a polynomial-time algorithm to find a set S with conductance O(√(φ log k/ε)) and vol(S) ≤ (1+ε)k for any ε > 2 log k/k.
These algorithms can be implemented locally using truncated random walks, with running time almost linear in k.
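The two quantities this abstract is built around, conductance and the truncated random walk, can be sketched in a few lines. This is an illustrative toy (the graph, function names, and the truncation threshold `eps` are mine), not the paper's algorithm:

```python
# Toy sketch: conductance of a vertex set, and one lazy random-walk step
# with truncation of small probability mass (the local-partitioning primitive
# the abstract refers to). Graph and parameter choices are illustrative only.
from collections import defaultdict

def conductance(adj, S):
    """phi(S) = cut(S, V\\S) / min(vol(S), vol(V\\S)) for an unweighted graph."""
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_rest = sum(len(adj[u]) for u in adj) - vol_S
    return cut / min(vol_S, vol_rest)

def lazy_walk_step(adj, p, eps):
    """One lazy random-walk step; entries below eps are truncated to zero."""
    q = defaultdict(float)
    for u, mass in p.items():
        q[u] += mass / 2                      # lazy self-loop: keep half
        for v in adj[u]:
            q[v] += mass / (2 * len(adj[u]))  # spread the rest to neighbours
    return {u: m for u, m in q.items() if m >= eps}  # truncation keeps it local

# Toy graph: two triangles joined by a single edge, so {0, 1, 2} is a sparse cut.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(conductance(adj, {0, 1, 2}))  # 1 cut edge / volume 7
```

Truncation is what makes the walk local: the support of `p` (and hence the work per step) stays small instead of spreading over the whole graph.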
A Novel, Simple Interpretation of Nesterov’s Accelerated Method as a Combination of Gradient and Mirror Descent. ArXiv eprints, abs/1407.1537
, 2014
Cited by 2 (1 self)
First-order methods play a central role in large-scale convex optimization. Even though many variations exist, each suited to a particular problem form, almost all such methods fundamentally rely on two types of algorithmic steps and two corresponding types of analysis: gradient-descent steps, which yield primal progress, and mirror-descent steps, which yield dual progress. In this paper, we observe that the performances of these two types of steps are complementary, so that faster algorithms can be designed by coupling the two steps and combining their analyses. In particular, we show how to obtain a conceptually simple interpretation of Nesterov’s accelerated gradient method [Nes83, Nes04, Nes05], a cornerstone algorithm in convex optimization. Nesterov’s method is the optimal first-order method for the class of smooth convex optimization problems. However, to the best of our knowledge, the proof of the fast convergence of Nesterov’s method has not found a clear interpretation and is still regarded by many as crucially relying on an “algebraic trick” [Jud13]. We apply our novel insights to express Nesterov’s algorithm as a natural coupling of gradient descent and mirror descent and to write its proof of convergence as a simple combination of the convergence analyses of the two underlying steps. We believe that the complementary view of gradient descent and mirror descent proposed in this paper will prove very useful in the design of first-order methods, as it allows us to design fast algorithms in a conceptually easier way. For instance, our view greatly facilitates the adaptation of nontrivial variants of Nesterov’s method to specific scenarios, such as packing and covering problems [AO14b, AO14a].
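The coupling described in this abstract can be shown on a toy smooth objective. The sketch below uses the Euclidean mirror map (so the mirror step is a plain dual-averaging step) and a standard step-size schedule for the smooth case; the test function f(x, y) = 2x² + y² and the iteration count are my own choices, not from the paper:

```python
# Toy sketch of an accelerated method as a coupling of a gradient-descent step
# (primal progress) and a mirror-descent step (dual progress), in the spirit of
# the abstract. Euclidean mirror map; f(x, y) = 2x^2 + y^2 is L-smooth, L = 4.

def grad(x):
    return (4 * x[0], 2 * x[1])

L = 4.0
y = z = (5.0, -3.0)               # y: gradient-step iterate, z: mirror iterate
for k in range(200):
    tau = 2.0 / (k + 2)            # coupling weight between the two iterates
    x = tuple(t * zi + (1 - t) * yi for t, zi, yi in zip((tau, tau), z, y))
    g = grad(x)
    y = tuple(xi - gi / L for xi, gi in zip(x, g))        # gradient-descent step
    alpha = (k + 2) / (2 * L)                             # growing mirror step size
    z = tuple(zi - alpha * gi for zi, gi in zip(z, g))    # mirror-descent step

print(y)  # approaches the minimizer (0, 0)
```

Note how neither step alone uses the schedule: the gradient step always takes the safe 1/L step, while the mirror step takes increasingly aggressive steps; the τ-weighted coupling is what combines their complementary guarantees.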
New Results in the Theory of Approximation  Fast Graph Algorithms and Inapproximability
, 2013
For several basic optimization problems, it is NP-hard to find an exact solution. As a result, understanding the best possible tradeoff between the running time of an algorithm and its approximation guarantee is a fundamental question in theoretical computer science, and the central goal of the theory of approximation. There are two aspects to the theory of approximation: (1) efficient approximation algorithms that establish tradeoffs between approximation guarantee and running time, and (2) inapproximability results that give evidence against them. In this thesis, we contribute to both facets of the theory of approximation. In the first part of this thesis, we present the first near-linear-time algorithm for Balanced Separator (given a graph, partition its vertices into two roughly equal parts without cutting too many edges) that achieves the best approximation guarantee possible for algorithms in its class. This is a classic graph partitioning problem and has deep connections to several areas of both theory and practice, such as metric embeddings, Markov chains, clustering, etc.
MIT Math
Given a subset A of vertices of an undirected graph G, the cut-improvement problem asks us to find a subset S that is similar to A but has smaller conductance. An elegant algorithm for this problem has been given by Andersen and Lang [AL08] and requires solving a small number of single-commodity maximum flow computations over the whole graph G. In this paper, we introduce LocalImprove, the first cut-improvement algorithm that is local, i.e., that runs in time dependent on the size of the input set A rather than on the size of the entire graph. Moreover, LocalImprove achieves this local behavior while closely matching the same theoretical guarantee as the global algorithm of Andersen and Lang. The main application of LocalImprove is to the design of better local graph partitioning algorithms. All previously known local algorithms for graph partitioning are random-walk based and can only guarantee an output conductance of Õ(√φopt) when the target set has conductance φopt ∈ [0, 1]. Very recently, Zhu, Lattanzi and Mirrokni [ZLM13] improved this to O(φopt/√Conn), where the internal connectivity parameter Conn ∈ [0, 1] is defined as the reciprocal of the mixing time of the random walk over the graph induced by the target set. This regime is of high practical interest in learning applications, as it corresponds to the case when the target set is a well-connected ground-truth cluster. In this work, we show how to use LocalImprove to obtain a constant approximation O(φopt) as long as Conn/φopt = Ω(1). This yields the first flow-based algorithm for local graph partitioning. Moreover, its performance strictly outperforms the ones based on random walks and surprisingly matches that of the best known global algorithm, which is SDP-based, in this parameter regime [MMV12].
Finally, our results show that spectral methods are not the only viable approach to the construction of local graph partitioning algorithms and open the door to the study of algorithms with even better approximation and locality guarantees.
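The building block this abstract keeps invoking is single-commodity s-t maximum flow. The sketch below is a plain Edmonds–Karp implementation of that primitive on a made-up toy network; it illustrates the flow computation that [AL08]-style cut improvement calls repeatedly, not LocalImprove itself:

```python
# Toy sketch of the single-commodity max-flow primitive behind flow-based
# cut improvement (Edmonds-Karp: BFS augmenting paths on residual capacities).
# The example network and its capacities are illustrative only.
from collections import deque

def max_flow(cap, s, t):
    """cap: dict of dicts of residual capacities, mutated in place."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                      # no augmenting path left
        # Recover the path, push the bottleneck amount along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug                 # use forward capacity
            cap[v][u] = cap[v].get(u, 0) + aug  # add residual back-edge
        flow += aug

cap = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}, 't': {}}
print(max_flow(cap, 's', 't'))  # → 4
```

A global cut-improvement routine would run such computations over an auxiliary graph built from all of G; the point of the abstract is that LocalImprove confines this work to a neighborhood of the input set A.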
Sublinear Columnwise Actions of the Matrix Exponential on Social Networks
, 2014
Local graph clustering beyond . . .
, 2013
Motivated by applications of large-scale graph clustering, we study random-walk-based local algorithms whose running times depend only on the size of the output cluster, rather than the entire graph. All previously known such algorithms guarantee an output conductance of ...