Results 1 - 10 of 560
Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms, 1987
"... In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in qlogn) amortized tim ..."
Abstract
-
Cited by 739 (18 self)
In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in O(log n) amortized time and all other standard heap operations in O(1) amortized time. Using F-heaps we are able to obtain improved running times for several network optimization algorithms. In particular, we obtain the following worst-case bounds, where n is the number of vertices and m the number of edges in the problem graph: (1) O(n log n + m) for the single-source shortest path problem with nonnegative edge lengths, improved from O(m log_(m/n+2) n); (2) O(n² log n + nm) for the all-pairs shortest path problem, improved from O(nm log_(m/n+2) n); (3) O(n² log n + nm) for the assignment problem (weighted bipartite matching), improved from O(nm log_(m/n+2) n); (4) O(m β(m, n)) for the minimum spanning tree problem, improved from O(m log log_(m/n+2) n), where β(m, n) = min{i | log^(i) n ≤ m/n}. Note that β(m, n) ≤ log* n if m ≥ n. Of these results, the improved bound for minimum spanning trees is the most striking, although all the ...
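To make the headline application concrete, here is a minimal sketch (not from the paper) of Dijkstra's single-source shortest-path algorithm driven by a heap; the graph encoding, the function name dijkstra, and the use of Python's heapq are illustrative assumptions. With heapq, a binary heap, the sketch runs in O(m log n); substituting a Fibonacci heap, whose decrease-key takes O(1) amortized time, is what yields the paper's O(n log n + m) bound.

import heapq

def dijkstra(graph, source):
    """graph: dict mapping vertex -> list of (neighbor, nonnegative edge length)."""
    dist = {source: 0}
    pq = [(0, source)]                       # (tentative distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):    # stale queue entry: lazy deletion
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))  # stands in for decrease-key
    return dist

# Example: dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}, "a")
# returns {"a": 0, "b": 2, "c": 3}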
A new approach to the maximum flow problem - JOURNAL OF THE ACM, 1988
"... All previously known efficient maximum-flow algorithms work by finding augmenting paths, either one path at a time (as in the original Ford and Fulkerson algorithm) or all shortest-length augmenting paths at once (using the layered network approach of Dinic). An alternative method based on the pre ..."
Abstract
-
Cited by 672 (33 self)
All previously known efficient maximum-flow algorithms work by finding augmenting paths, either one path at a time (as in the original Ford and Fulkerson algorithm) or all shortest-length augmenting paths at once (using the layered network approach of Dinic). An alternative method based on the preflow concept of Karzanov is introduced. A preflow is like a flow, except that the total amount flowing into a vertex is allowed to exceed the total amount flowing out. The method maintains a preflow in the original network and pushes local flow excess toward the sink along what are estimated to be shortest paths. The algorithm and its analysis are simple and intuitive, yet the algorithm runs as fast as any other known method on dense graphs, achieving an O(n³) time bound on an n-vertex graph. By incorporating the dynamic tree data structure of Sleator and Tarjan, we obtain a version of the algorithm running in O(nm log(n²/m)) time on an n-vertex, m-edge graph. This is as fast as any known method for any graph density and faster on graphs of moderate density. The algorithm also admits efficient distributed and parallel implementations. A parallel implementation running in O(n² log n) time using n processors and O(m) space is obtained. This time bound matches that of the Shiloach-Vishkin algorithm, which also uses n processors but requires O(n²) space.
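As a hedged illustration of the preflow idea, the sketch below implements the generic push/relabel method on a dense capacity matrix. It omits the specific vertex-selection discipline and the dynamic trees that underlie the stated bounds, and the function name and matrix encoding are assumptions, so treat it as a sketch of the technique rather than the paper's algorithm.

def max_flow_push_relabel(cap, s, t):
    """cap: n x n matrix of nonnegative capacities; returns the max-flow value."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    height = [0] * n
    excess = [0] * n
    height[s] = n                        # the source starts at height n
    for v in range(n):                   # saturate every edge out of the source
        flow[s][v] = cap[s][v]
        flow[v][s] = -cap[s][v]
        excess[v] = cap[s][v]
    active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
    while active:
        u = active[-1]
        pushed = False
        for v in range(n):
            # push along residual edges that drop exactly one level in height
            if cap[u][v] - flow[u][v] > 0 and height[u] == height[v] + 1:
                delta = min(excess[u], cap[u][v] - flow[u][v])
                flow[u][v] += delta
                flow[v][u] -= delta
                excess[u] -= delta
                excess[v] += delta
                if v not in (s, t) and v not in active:
                    active.append(v)
                pushed = True
                if excess[u] == 0:
                    break
        if not pushed:                   # relabel: lift u just above its lowest residual neighbor
            height[u] = 1 + min(height[v] for v in range(n)
                                if cap[u][v] - flow[u][v] > 0)
        if excess[u] == 0:
            active.remove(u)
    return sum(flow[s][v] for v in range(n))

# Example: max_flow_push_relabel([[0, 3, 2, 0], [0, 0, 1, 2],
#                                 [0, 0, 0, 3], [0, 0, 0, 0]], 0, 3)
# expected maximum flow value: 5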
Retiming Synchronous Circuitry - ALGORITHMICA, 1991
"... This paper describes a circuit transformation called retiming in which registers are added at some points in a circuit and removed from others in such a way that the functional behavior of the circuit as a whole is preserved. We show that retiming can be used to transform a given synchronous circui ..."
Abstract
-
Cited by 376 (3 self)
This paper describes a circuit transformation called retiming in which registers are added at some points in a circuit and removed from others in such a way that the functional behavior of the circuit as a whole is preserved. We show that retiming can be used to transform a given synchronous circuit into a more efficient circuit under a variety of different cost criteria. We model a circuit as a graph in which the vertex set V is a collection of combinational logic elements and the edge set E is the set of interconnections, each of which may pass through zero or more registers. We give an O(|V| |E| lg |V|) algorithm for determining an equivalent retimed circuit with the smallest possible clock period. We show that the problem of determining an equivalent retimed circuit with minimum state (total number of registers) is polynomial-time solvable. This result yields a polynomial-time optimal solution to the problem of pipelining combinational circuitry with minimum register cost. We also give a characterization of optimal retiming based on an efficiently solvable mixed-integer linear-programming problem.
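The abstract does not spell out the algorithm, but retiming legality and clock-period requirements are commonly written as difference constraints of the form r(u) - r(v) <= b over the per-vertex register counts r, and such a system is feasible exactly when its constraint graph has no negative cycle, which Bellman-Ford detects. The sketch below is only that generic feasibility subroutine under this assumption; the constraint encoding and function name are invented for illustration, and the construction of the circuit-derived constraint sets (the paper's path quantities) is omitted.

def solve_difference_constraints(num_vars, constraints):
    """constraints: list of (u, v, b) meaning r[u] - r[v] <= b.
    Returns a satisfying assignment r, or None if the system is infeasible."""
    r = [0] * num_vars                   # equivalent to a virtual source at distance 0
    for _ in range(num_vars - 1):
        changed = False
        for u, v, b in constraints:      # relax edge v -> u of weight b
            if r[v] + b < r[u]:
                r[u] = r[v] + b
                changed = True
        if not changed:
            break
    for u, v, b in constraints:          # any further improvement means a negative cycle
        if r[v] + b < r[u]:
            return None                  # infeasible: no legal retiming for this target
    return r

# Example: solve_difference_constraints(2, [(0, 1, 2), (1, 0, -1)]) returns [0, -1],
# while [(0, 1, 2), (1, 0, -3)] is infeasible and returns None.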
Principles and Methods of Testing Finite State Machines -- A Survey - PROCEEDINGS OF THE IEEE, 1996
"... With advanced computer technology, systems are getting larger to fulfill more complicated tasks, however, they are also becoming less reliable. Consequently, testing is an indispensable part of system design and implementation; yet it has proved to be a formidable task for complex systems. This moti ..."
Abstract
-
Cited by 345 (16 self)
With advanced computer technology, systems are getting larger to fulfill more complicated tasks; however, they are also becoming less reliable. Consequently, testing is an indispensable part of system design and implementation; yet it has proved to be a formidable task for complex systems. This motivates the study of testing finite state machines to ensure the correct functioning of systems and to discover aspects of their behavior. A finite state machine contains a finite number of states and produces outputs on state transitions after receiving inputs. Finite state machines are widely used to model systems in diverse areas, including sequential circuits, certain types of programs, and, more recently, communication protocols. In a testing problem we have a machine about which we lack some information; we would like to deduce this information by providing a sequence of inputs to the machine and observing the outputs produced. Because of its practical importance and theoretical interest, the problem of testing finite state machines has been studied in different areas and at various times. The earliest published literature on this topic dates back to the 50’s. Activities in the 60’s and early 70’s were motivated mainly by automata theory and sequential circuit testing. The area seemed to have mostly died down until a few years ago, when the testing problem was resurrected and is now being studied anew due to its applications to conformance testing of communication protocols. While some old problems that had been open for decades were resolved recently, new concepts and more intriguing problems from new applications have emerged. We review the fundamental problems in testing finite state machines and techniques for solving these problems, tracing progress in the area from its inception to the present state of the art. In addition, we discuss extensions of finite state machines and some other topics related to testing.
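For readers new to the model, here is a minimal sketch (not from the survey) of the black-box setup it describes: a Mealy-style machine that emits an output on each transition, plus a driver that applies an input sequence and records what an external tester would observe. The class name and the toy two-state machine are made up for illustration.

class MealyMachine:
    def __init__(self, transitions, start):
        # transitions: dict mapping (state, input symbol) -> (next state, output)
        self.transitions = transitions
        self.state = start

    def apply(self, inputs):
        """Feed an input sequence and return the observed output sequence."""
        outputs = []
        for symbol in inputs:
            self.state, out = self.transitions[(self.state, symbol)]
            outputs.append(out)
        return outputs

# Example: a tester who sees only outputs can use input sequences like "aab"
# to distinguish hypotheses about the machine's hidden state.
m = MealyMachine({("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 1),
                  ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 0)}, "s0")
print(m.apply("aab"))   # prints [0, 1, 1]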
Efficient Identification of Web Communities - SIXTH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2000
"... We define a community on the web as a set of sites that have more links (in either direction) to members of the community than to non-members. Members of such a community can be eciently identified in a maximum flow / minimum cut framework, where the source is composed of known members, and the sink ..."
Abstract
-
Cited by 293 (13 self)
We define a community on the web as a set of sites that have more links (in either direction) to members of the community than to non-members. Members of such a community can be efficiently identified in a maximum flow / minimum cut framework, where the source is composed of known members, and the sink consists of well-known non-members. A focused crawler that crawls to a fixed depth can approximate community membership by augmenting the graph induced by the crawl with links to a virtual sink node. The effectiveness of the approximation algorithm is demonstrated with several crawl results that identify hubs, authorities, web rings, and other link topologies that are useful but not easily categorized. Applications of our approach include focused crawlers and search engines, automatic population of portal categories, and improved filtering.
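A hedged sketch of the framework the abstract describes: seed pages hang off a virtual source, every other crawled page gets an edge to a virtual sink, hyperlinks become edges counted in both directions, and the source side of a minimum cut is reported as the community. The capacities, the function name, and the small Edmonds-Karp max-flow routine inside are illustrative assumptions rather than the paper's exact procedure or parameters.

from collections import deque, defaultdict

def min_cut_community(links, seeds, edge_cap=3):
    """links: iterable of (page, page) hyperlinks; seeds: known community members."""
    cap = defaultdict(lambda: defaultdict(int))
    pages = set()
    for a, b in links:
        pages.update((a, b))
        cap[a][b] += edge_cap            # links count in either direction
        cap[b][a] += edge_cap
    SRC, SINK = "_source_", "_sink_"
    for p in pages:
        if p in seeds:
            cap[SRC][p] = float("inf")   # seeds are forced into the community
        else:
            cap[p][SINK] += 1            # weak pull toward the non-community side
    # Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
    while True:
        parent = {SRC: None}
        queue = deque([SRC])
        while queue and SINK not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if SINK not in parent:
            break
        path, v = [], SINK               # recover the augmenting path edges
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:                # update residual capacities
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
    # community = pages still reachable from the source in the residual graph
    reachable, queue = {SRC}, deque([SRC])
    while queue:
        u = queue.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in reachable:
                reachable.add(v)
                queue.append(v)
    return reachable & pages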
Simultaneous Truth and Performance Level Estimation (STAPLE): An Algorithm for the Validation of Image Segmentation - IEEE TRANS. MED. IMAG., 2004
"... Characterizing the performance of image segmentation approaches has been a persistent challenge. Performance analysis is important since segmentation algorithms often have limited accuracy and precision. Interactive drawing of the desired segmentation by human raters has often been the only acceptab ..."
Abstract
-
Cited by 250 (21 self)
Characterizing the performance of image segmentation approaches has been a persistent challenge. Performance analysis is important since segmentation algorithms often have limited accuracy and precision. Interactive drawing of the desired segmentation by human raters has often been the only acceptable approach, and yet suffers from intra-rater and inter-rater variability. Automated algorithms have been sought in order to remove the variability introduced by raters, but such algorithms must be assessed to ensure they are suitable for the task. The performance of raters...
The NP-completeness column: an ongoing guide - JOURNAL OF ALGORITHMS, 1987
"... This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freem ..."
Abstract
-
Cited by 239 (0 self)
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., New York, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.) or open problems they would like publicized, should ...
On implementing the push-relabel method for the maximum flow problem, 1994
"... We study efficient implementations of the push-relabel method for the maximum flow problem. The resulting codes are faster than the previous codes, and much faster on some problem families. The speedup is due to the combination of heuristics used in our implementation. We also exhibit a family of p ..."
Abstract
-
Cited by 209 (10 self)
We study efficient implementations of the push-relabel method for the maximum flow problem. The resulting codes are faster than the previous codes, and much faster on some problem families. The speedup is due to the combination of heuristics used in our implementation. We also exhibit a family of problems for which all known methods seem to have almost quadratic time growth rate.
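The abstract does not name the heuristics it combines, so the sketch below shows only one heuristic commonly used in push-relabel codes, global relabeling: a backward breadth-first search from the sink in the residual graph that resets every height to its exact distance to the sink. Treating this as representative of the paper's implementation is an assumption, and the function name and matrix encoding are likewise illustrative; it could be called periodically from a push-relabel loop such as the one sketched under the Goldberg-Tarjan entry above.

from collections import deque

def global_relabel(cap, flow, s, t):
    """cap, flow: n x n matrices. Returns exact residual distances to the sink t;
    the source s keeps the conventional height n."""
    n = len(cap)
    height = [n] * n          # n doubles as "not yet reached from the sink"
    height[t] = 0
    queue = deque([t])
    while queue:
        v = queue.popleft()
        for u in range(n):
            # edge u -> v is residual iff cap[u][v] - flow[u][v] > 0
            if u != s and cap[u][v] - flow[u][v] > 0 and height[u] == n:
                height[u] = height[v] + 1   # exact BFS distance to the sink
                queue.append(u)
    return height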
Edmonds polytopes and a hierarchy of combinatorial problems, 2006
"... Let S be a set of linear inequalities that determine a bounded polyhedron P. The closure of S is the smallest set of inequalities that contains S and is closed under two operations: (i) taking linear combinations of inequalities, (ii) replacing an inequality Σaj xj ≤ a0, where a1,a2,...,an are integ ..."
Abstract
-
Cited by 170 (0 self)
Let S be a set of linear inequalities that determine a bounded polyhedron P. The closure of S is the smallest set of inequalities that contains S and is closed under two operations: (i) taking linear combinations of inequalities, (ii) replacing an inequality Σ aj xj ≤ a0, where a1, a2, ..., an are integers, by the inequality Σ aj xj ≤ a with a ≥ ⌊a0⌋. Obviously, if integers x1, x2, ..., xn satisfy all the inequalities in S, then they satisfy also all inequalities in the closure of S. Conversely, let Σ cj xj ≤ c0 hold for all choices of integers x1, x2, ..., xn that satisfy all the inequalities in S. Then we prove that Σ cj xj ≤ c0 belongs to the closure of S. To each integer linear programming problem, we assign a nonnegative integer, called its rank. (The rank is the minimum number of iterations of the operation (ii) that are required in order to eliminate the integrality constraint.) We prove that there is no upper bound on the rank of problems arising from the search for largest independent sets in graphs.
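As a worked one-step example of the two closure operations (not taken from the paper), the derivation below produces a rank-1 cut for independent sets in a triangle on vertices 1, 2, 3, the family referred to in the last sentence:

% One closure step applied to the edge inequalities of a triangle.
\begin{align*}
  &\text{Edge inequalities:} && x_1 + x_2 \le 1, \quad x_2 + x_3 \le 1, \quad x_1 + x_3 \le 1 \\
  &\text{(i) combine with weights } \tfrac{1}{2}\text{:} && x_1 + x_2 + x_3 \le \tfrac{3}{2} \\
  &\text{(ii) coefficients are integers, so round:} && x_1 + x_2 + x_3 \le \left\lfloor \tfrac{3}{2} \right\rfloor = 1
\end{align*}

The fractional point (1/2, 1/2, 1/2) satisfies all three edge inequalities but violates the derived cut, so at least one closure iteration is genuinely needed here; the paper's final result says that for independent sets in general graphs no fixed number of iterations suffices.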