Results 1–10 of 85
A new approach to the minimum cut problem
 Journal of the ACM
, 1996
"... Abstract. This paper presents a new approach to finding minimum cuts in undirected graphs. The fundamental principle is simple: the edges in a graph’s minimum cut form an extremely small fraction of the graph’s edges. Using this idea, we give a randomized, strongly polynomial algorithm that finds th ..."
Abstract

Cited by 126 (9 self)
Abstract. This paper presents a new approach to finding minimum cuts in undirected graphs. The fundamental principle is simple: the edges in a graph’s minimum cut form an extremely small fraction of the graph’s edges. Using this idea, we give a randomized, strongly polynomial algorithm that finds the minimum cut in an arbitrarily weighted undirected graph with high probability. The algorithm runs in O(n² log³ n) time, a significant improvement over the previous Õ(mn) time bounds based on maximum flows. It is simple and intuitive and uses no complex data structures. Our algorithm can be parallelized to run in RNC with n² processors; this gives the first proof that the minimum cut problem can be solved in RNC. The algorithm does more than find a single minimum cut; it finds all of them. With minor modifications, our algorithm solves two other problems of interest. Our algorithm finds all cuts with value within a multiplicative factor of α of the minimum cut’s in expected Õ(n^(2α)) time, or in RNC with n^(2α) processors. The problem of finding a minimum multiway cut of a graph into r pieces is solved in expected Õ(n^(2(r−1))) time, or in RNC with n^(2(r−1)) processors. The “trace” of the algorithm’s execution on these two problems forms a new compact data structure for representing all small cuts and all multiway cuts in a graph. This data structure can be efficiently transformed into the …
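The contraction principle the abstract describes can be illustrated with a minimal sketch: repeatedly contract a uniformly random edge until two supernodes remain, and the surviving crossing edges form a cut that is the minimum cut with decent probability per trial. This is plain random contraction, not the paper's faster recursive O(n² log³ n) variant; the graph, trial count, and seed below are illustrative.

```python
import random

def karger_min_cut(edges, n, trials=200, seed=0):
    """Estimate the min cut of a connected undirected multigraph by
    repeated random edge contraction: a minimum cut's edges are a small
    fraction of all edges, so a random edge rarely belongs to it."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(trials):
        parent = list(range(n))        # union-find over supernodes

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        components = n
        while components > 2:
            u, v = rng.choice(edges)   # picks among original edges;
            ru, rv = find(u), find(v)  # edges inside a supernode are skipped
            if ru != rv:
                parent[ru] = rv
                components -= 1
        # Edges still crossing the two surviving supernodes form a cut.
        best = min(best, sum(1 for u, v in edges if find(u) != find(v)))
    return best
```

With enough independent trials, the probability of missing the minimum cut drops geometrically, which is why the single-trial success bound suffices for a high-probability algorithm.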
Predicting protein complex membership using probabilistic network reliability
 Genome Res
, 2004
"... data ..."
(Show Context)
Detecting a Network Failure
 Proc. 41st Annual IEEE Symposium on Foundations of Computer Science
, 2000
"... Abstract. Measuring the properties of a large, unstructured network can be difficult: One may not have full knowledge of the network topology, and detailed global measurements may be infeasible. A valuable approach to such problems is to take measurements from selected locations within the network a ..."
Abstract

Cited by 45 (1 self)
Abstract. Measuring the properties of a large, unstructured network can be difficult: one may not have full knowledge of the network topology, and detailed global measurements may be infeasible. A valuable approach to such problems is to take measurements from selected locations within the network and then aggregate them to infer large-scale properties. One sees this notion applied in settings that range from Internet topology discovery tools to remote software agents that estimate the download times of popular web pages. Some of the most basic questions about this type of approach, however, are largely unresolved at an analytical level. How reliable are the results? How much does the choice of measurement locations affect the aggregate information one infers about the network? We describe algorithms that yield provable guarantees for a particular problem of this type: detecting a network failure. Suppose we want to detect events of the following form in an n-node network: an adversary destroys up to k nodes or edges, after which two subsets of the nodes, each of size at least εn, are disconnected from one another. We call such an event an (ε, k)-partition. One method for detecting such events would be to place “agents” at a set D of nodes, and record a fault whenever two of them become separated from each other. To be a good detection set, D should become disconnected whenever there is an (ε, k)-partition; in this way, it “witnesses” all such events. We show that every graph has a detection set of size polynomial in k and ε⁻¹, and independent of the size of the graph itself. Moreover, random sampling provides an effective way to construct such a set. Our analysis establishes a connection between graph separators and the notion of VC-dimension, using techniques based on matchings and disjoint paths.
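The "agents at a detection set D" mechanism can be sketched directly: a fault is recorded when some pair of agents becomes disconnected. The graph, failure, and agent placement in the usage below are illustrative; the paper's contribution is the size bound on D and the sampling analysis, not this connectivity check.

```python
from collections import deque

def component_of(adj, start):
    """Set of nodes reachable from start via BFS (adj: node -> neighbors)."""
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def detects_fault(adj, agents):
    """True if some pair of agents is disconnected, i.e. the detection
    set 'witnesses' a partition of the surviving network."""
    reachable = component_of(adj, agents[0])
    return any(a not in reachable for a in agents[1:])
```

For example, with two 4-cliques joined by a single bridge edge, agents placed one per side report no fault while the bridge is intact and a fault once it is destroyed.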
Inapproximability of the Tutte polynomial
, 2008
"... The Tutte polynomial of a graph G is a twovariable polynomial T(G; x, y) that encodes many interesting properties of the graph. We study the complexity of the following problem, for rationals x and y: take as input a graph G, and output a value which is a good approximation to T(G; x, y). Jaeger, V ..."
Abstract

Cited by 29 (8 self)
The Tutte polynomial of a graph G is a two-variable polynomial T(G; x, y) that encodes many interesting properties of the graph. We study the complexity of the following problem, for rationals x and y: take as input a graph G, and output a value which is a good approximation to T(G; x, y). Jaeger, Vertigan and Welsh have completely mapped the complexity of exactly computing the Tutte polynomial. They have shown that this is #P-hard, except along the hyperbola (x − 1)(y − 1) = 1 and at four special points. We are interested in determining for which points (x, y) there is a fully polynomial randomised approximation scheme (FPRAS) for T(G; x, y). Under the assumption RP ≠ NP, we prove that there is no FPRAS at (x, y) if (x, y) is in one of the half-planes x < −1 or y < −1 (excluding the easy-to-compute cases mentioned above). Two exceptions to this result are the half-line x < −1, y = 1 (which is still open) and the portion of the hyperbola (x − 1)(y − 1) = 2 corresponding to y < −1, which we show …
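For context, the exact computation whose complexity Jaeger, Vertigan and Welsh mapped follows the classic deletion–contraction recursion: T = y·T(G − e) for a loop, x·T(G/e) for a bridge, and T(G − e) + T(G/e) otherwise. A minimal sketch of exact evaluation (exponential time, tiny graphs only; it illustrates the hard problem, not the approximation question the abstract studies):

```python
def _connected(u, v, edges):
    """True if u and v are connected using only the given edges."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        if x == v:
            return True
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return False

def tutte(edges, x, y):
    """Evaluate T(G; x, y) for a connected multigraph by deletion-contraction."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                              # loop: T = y * T(G - e)
        return y * tutte(rest, x, y)
    # G/e: merge v into u in the remaining edges.
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if not _connected(u, v, rest):          # bridge: T = x * T(G / e)
        return x * tutte(contracted, x, y)
    return tutte(rest, x, y) + tutte(contracted, x, y)
```

On the triangle this yields T = x² + x + y, so T(1, 1) = 3 counts its spanning trees and T(2, 2) = 2³ counts its spanning subgraphs.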
Approximate counting by dynamic programming
 Proceedings of the 35th ACM Symposium on Theory of Computing
, 2003
"... Abstract We give efficient algorithms to sample uniformly, and count approximately, solutions to the zeroone knapsack problem. The algorithm is based on using dynamic programming to provide a deterministic relative approximation. Then "dart throwing " techniques are used to give a ..."
Abstract

Cited by 27 (3 self)
Abstract. We give efficient algorithms to sample uniformly, and count approximately, solutions to the zero-one knapsack problem. The algorithm is based on using dynamic programming to provide a deterministic relative approximation. Then “dart throwing” techniques are used to give arbitrary approximation ratios. We extend this approach to several related problems: the m-constraint zero-one knapsack, the general integer knapsack (including its m-constraint version) and contingency tables with constantly many rows. We also indicate how further improvements can be obtained using randomized rounding.
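The dynamic-programming core is easiest to see in the exact special case of small integer weights, where a standard 0-1 knapsack DP counts solutions outright. This is a sketch of that special case only; the paper's algorithm handles general weights via a deterministic relative approximation plus the "dart throwing" step.

```python
def count_knapsack(weights, b):
    """Count subsets of the given positive integer weights whose total
    is at most b, via the standard 0-1 knapsack DP over weight values."""
    dp = [0] * (b + 1)          # dp[w] = number of subsets of total weight w
    dp[0] = 1                   # the empty subset
    for wt in weights:
        for w in range(b, wt - 1, -1):   # descending: each item used once
            dp[w] += dp[w - wt]
    return sum(dp)              # subsets of every total weight <= b
```

For weights [1, 2, 3] and capacity 3 this counts five solutions: {}, {1}, {2}, {3}, and {1, 2}.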
Designing overlay multicast networks for streaming
 In Proceedings of ACM Symposium on Parallel Algorithms and Architectures
, 2003
"... In this paper we present a polynomial time approximation algorithm for designing a multicast overlay network. The algorithm finds a solution that satisfies capacity and reliability constraints to within a constant factor of optimal, and cost to within a logarithmic factor. The class of networks that ..."
Abstract

Cited by 25 (6 self)
In this paper we present a polynomial time approximation algorithm for designing a multicast overlay network. The algorithm finds a solution that satisfies capacity and reliability constraints to within a constant factor of optimal, and cost to within a logarithmic factor. The class of networks that our algorithm applies to includes the one used by Akamai Technologies to deliver live media streams over the Internet. In particular, we analyze networks consisting of three stages of nodes. The nodes in the first stage are the sources where live streams originate. A source forwards each of its streams to one or more nodes in the second stage, which are called reflectors. A reflector can split an incoming stream into multiple identical outgoing streams, which are then sent on to nodes in the third and final stage, which are called the sinks. As the packets in a stream travel from one stage to the next, some of them may be lost. The job of a sink is to combine the packets from multiple instances of the same stream (by reordering packets and discarding duplicates) to form a single instance of the stream with minimal loss. We assume that the loss rate between any pair of nodes in the network is known, and that losses between different pairs are independent, but discuss extensions in which some losses may be correlated.
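Under the independent-loss assumption stated in the abstract, the probability that a packet reaches a sink through at least one of its reflector paths has a simple closed form: one minus the product, over reflectors, of each path's failure probability. A sketch with illustrative loss rates (the function name and inputs are ours, not the paper's):

```python
def delivery_prob(source_to_reflector, reflector_to_sink):
    """Probability that at least one copy of a packet reaches the sink,
    given per-hop loss rates for each reflector path; losses on distinct
    node pairs are assumed independent."""
    miss = 1.0
    for p_sr, p_rs in zip(source_to_reflector, reflector_to_sink):
        # A copy survives a path only if both hops succeed.
        miss *= 1.0 - (1.0 - p_sr) * (1.0 - p_rs)
    return 1.0 - miss
```

For instance, two reflector paths with 50% loss on every hop each deliver a copy with probability 0.25, so the sink receives the packet with probability 1 − 0.75² = 0.4375.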
A General Framework for Graph Sparsification
, 2011
"... We present a general framework for constructing cut sparsifiers in undirected graphs — weighted subgraphs for which every cut has the same weight as the original graph, up to a multiplicative factor of (1 ± ǫ). Using this framework, we simplify, unify and improve upon previous sparsification results ..."
Abstract

Cited by 19 (1 self)
We present a general framework for constructing cut sparsifiers in undirected graphs — weighted subgraphs for which every cut has the same weight as the original graph, up to a multiplicative factor of (1 ± ǫ). Using this framework, we simplify, unify and improve upon previous sparsification results. As simple instantiations of this framework, we show that sparsifiers can be constructed by sampling edges according to their strength (a result of Benczúr and Karger), effective resistance (a result of Spielman and Srivastava), edge connectivity, or by sampling random spanning trees. Sampling according to edge connectivity is the most aggressive method, and the most challenging to analyze. Our proof that this method produces sparsifiers resolves an open question of Benczúr and Karger. While the above results are interesting from a combinatorial standpoint, we also prove new algorithmic results. In particular, we develop techniques that give the first (optimal) O(m)-time sparsification algorithm for unweighted graphs. Our algorithm has a running time of O(m) + Õ(n/ǫ²) for weighted graphs, which is also linear unless the input graph is very sparse itself. In both cases, this improves upon the previous best running times (due to Benczúr and Karger) of O(m log² n) (for the unweighted case) and O(m log³ n) (for the weighted case) respectively. Our algorithm constructs sparsifiers that contain O(n log n/ǫ²) edges in expectation; the only known construction of sparsifiers with fewer edges is by a substantially slower algorithm running in O(n³m/ǫ²) time. A key ingredient of our proofs is a natural generalization of Karger’s bound on the number of small cuts in an undirected graph. Given the numerous applications of Karger’s bound, we suspect that our generalization will also be of independent interest.
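All the sampling schemes the abstract unifies share one unbiasedness backbone: keep each edge with some probability p and reweight survivors by 1/p, so every cut's expected weight equals its original weight. A minimal sketch with a single uniform p (the framework's substance lies in choosing per-edge probabilities by strength, resistance, or connectivity, which this deliberately does not do):

```python
import random

def sample_sparsifier(edges, p, seed=0):
    """Keep each edge independently with probability p, reweighted 1/p,
    so cut weights are preserved in expectation."""
    rng = random.Random(seed)
    return [(u, v, 1.0 / p) for u, v in edges if rng.random() < p]

def cut_weight(weighted_edges, side):
    """Total weight of edges crossing the vertex set `side`."""
    return sum(w for u, v, w in weighted_edges if (u in side) != (v in side))
```

Averaging a fixed cut's weight over many sampled subgraphs converges to the original weight; concentration (every cut being close simultaneously) is what requires the non-uniform probabilities and the cut-counting bound.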
The Resilience of WDM Networks to Probabilistic Geographical Failures
"... Abstract—Telecommunications networks, and in particular optical WDM networks, are vulnerable to largescale failures of their physical infrastructure, resulting from physical attacks (such as an Electromagnetic Pulse attack) or natural disasters (such as solar flares, earthquakes, and floods). Such ..."
Abstract

Cited by 16 (7 self)
Abstract—Telecommunications networks, and in particular optical WDM networks, are vulnerable to large-scale failures of their physical infrastructure, resulting from physical attacks (such as an Electromagnetic Pulse attack) or natural disasters (such as solar flares, earthquakes, and floods). Such events happen at specific geographical locations and disrupt specific parts of the network, but their effects are not deterministic. Therefore, we provide a unified framework to model the network vulnerability when the event has a probabilistic nature, defined by an arbitrary probability density function. Our framework captures scenarios with a number of simultaneous attacks, in which network components consist of several dependent subcomponents, and in which either a 1+1 or a 1:1 protection plan is in place. We use computational geometric tools to provide efficient algorithms to identify vulnerable points within the network under various metrics. Then, we obtain numerical results for specific backbone networks, thereby demonstrating the applicability of our algorithms to real-world scenarios. Our novel approach allows for identifying locations which require additional protection efforts (e.g., equipment shielding). Overall, the paper demonstrates that using computational geometric techniques can significantly contribute to our understanding of network resilience. Index Terms—Network survivability, geographic networks, network protection, computational geometry, optical networks.
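The geometric core of identifying vulnerable points, deciding which fiber links a disk-shaped failure of radius r would cut, reduces to point-to-segment distance tests. A brute-force sketch for a deterministic disk failure (the paper's algorithms use computational-geometric structures instead of enumeration, and handle probabilistic failure effects; the link coordinates here are illustrative):

```python
import math

def seg_dist(p, a, b):
    """Euclidean distance from point p to the closed segment ab."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    length_sq = dx * dx + dy * dy
    if length_sq == 0.0:                     # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Project p onto the line through a and b, clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def links_cut(center, radius, links):
    """Number of links (fiber segments) intersecting a failure disk."""
    return sum(1 for a, b in links if seg_dist(center, a, b) <= radius)
```

Scanning candidate centers with `links_cut` identifies the most damaging failure location under the "number of links cut" metric; other metrics from the paper (e.g., expected capacity loss) weight each link instead of counting it.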
From Balls and Bins to Points and Vertices
 Algorithmic Operations Research (AlgOR)
"... All intext references underlined in blue are linked to publications on ResearchGate, letting you access and read them immediately. ..."
Abstract

Cited by 15 (2 self)