Results 1 - 10 of 2,900
Graph-based algorithms for Boolean function manipulation
IEEE Transactions on Computers, 1986
"... In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on th ..."
Cited by 3526 (46 self)
In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.
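As a concrete illustration of the data structure (a minimal sketch, not Bryant's original code; the class and function names here are invented for the example), the following Python builds reduced, ordered BDDs through a unique table and combines functions with the apply operation the abstract alludes to:

    # A minimal sketch of a reduced ordered BDD (ROBDD), assuming a fixed
    # variable order 0 < 1 < ... < n-1. Illustrative only.

    class BDD:
        def __init__(self):
            self.unique = {}                 # (var, low, high) -> node id, enforces sharing
            self.nodes = {0: None, 1: None}  # terminal nodes 0 and 1
            self.next_id = 2

        def mk(self, var, low, high):
            """Return a node for (var, low, high), reusing existing nodes."""
            if low == high:                  # redundant test: skip the node
                return low
            key = (var, low, high)
            if key not in self.unique:
                self.unique[key] = self.next_id
                self.nodes[self.next_id] = key
                self.next_id += 1
            return self.unique[key]

        def apply(self, op, u, v, memo=None):
            """Combine two BDDs with a binary Boolean operator.
            Runs in time proportional to the product of the graph sizes."""
            if memo is None:
                memo = {}
            if (u, v) in memo:
                return memo[(u, v)]
            if u in (0, 1) and v in (0, 1):
                res = op(u, v)
            else:
                # split on the earlier variable in the order
                vu = self.nodes[u][0] if u > 1 else float("inf")
                vv = self.nodes[v][0] if v > 1 else float("inf")
                var = min(vu, vv)
                u0, u1 = (self.nodes[u][1], self.nodes[u][2]) if vu == var else (u, u)
                v0, v1 = (self.nodes[v][1], self.nodes[v][2]) if vv == var else (v, v)
                res = self.mk(var, self.apply(op, u0, v0, memo),
                                   self.apply(op, u1, v1, memo))
            memo[(u, v)] = res
            return res

    bdd = BDD()
    x0 = bdd.mk(0, 0, 1)                          # the function x0
    x1 = bdd.mk(1, 0, 1)                          # the function x1
    f = bdd.apply(lambda a, b: a & b, x0, x1)     # x0 AND x1

The unique table is what enforces sharing: equivalent subfunctions are stored exactly once, so testing two functions for equivalence reduces to comparing node identifiers.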
Compositional Model Checking
1999
"... We describe a method for reducing the complexity of temporal logic model checking in systems composed of many parallel processes. The goal is to check properties of the components of a system and then deduce global properties from these local properties. The main difficulty with this type of approac ..."
Cited by 3252 (70 self)
We describe a method for reducing the complexity of temporal logic model checking in systems composed of many parallel processes. The goal is to check properties of the components of a system and then deduce global properties from these local properties. The main difficulty with this type of approach is that local properties are often not preserved at the global level. We present a general framework for using additional interface processes to model the environment for a component. These interface processes are typically much simpler than the full environment of the component. By composing a component with its interface processes and then checking properties of this composition, we can guarantee that these properties will be preserved at the global level. We give two example compositional systems based on the logic CTL*.
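A toy rendering of the idea (hypothetical names throughout; a plain reachability check stands in for the paper's CTL* machinery): compose a component with a much simpler interface process and verify a safety property on the product.

    # Labeled transition systems composed synchronously on shared actions;
    # an invariant check by BFS substitutes for full temporal-logic checking.
    from collections import deque

    def compose(ts1, ts2):
        """Synchronous product: shared actions move together, others interleave."""
        shared = ts1["alphabet"] & ts2["alphabet"]
        init = (ts1["init"], ts2["init"])
        trans, stack, seen = {}, [init], {init}
        while stack:
            s1, s2 = stack.pop()
            succs = []
            for (a, t1) in ts1["trans"].get(s1, []):
                if a in shared:
                    for (b, t2) in ts2["trans"].get(s2, []):
                        if b == a:
                            succs.append((a, (t1, t2)))
                else:
                    succs.append((a, (t1, s2)))
            for (a, t2) in ts2["trans"].get(s2, []):
                if a not in shared:
                    succs.append((a, (s1, t2)))
            trans[(s1, s2)] = succs
            for (_, t) in succs:
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return {"alphabet": ts1["alphabet"] | ts2["alphabet"],
                "init": init, "trans": trans}

    def violates(ts, bad):
        """BFS reachability: is any reachable state a bad state?"""
        seen, q = {ts["init"]}, deque([ts["init"]])
        while q:
            s = q.popleft()
            if bad(s):
                return True
            for (_, t) in ts["trans"].get(s, []):
                if t not in seen:
                    seen.add(t)
                    q.append(t)
        return False

    # component raises `req` and waits for `ack`; interface process always acks
    component = {"alphabet": {"req", "ack"}, "init": "idle",
                 "trans": {"idle": [("req", "wait")], "wait": [("ack", "idle")]}}
    interface = {"alphabet": {"req", "ack"}, "init": "i",
                 "trans": {"i": [("req", "j")], "j": [("ack", "i")]}}
    prod = compose(component, interface)
    print(violates(prod, lambda s: s[0] == "stuck"))   # False: no stuck state reachable

The interface process here has two states, far fewer than a realistic environment would; that size gap is the source of the method's savings.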
Dynamic Logic
Handbook of Philosophical Logic, 1984
"... ed to be true under the valuation u iff there exists an a 2 N such that the formula x = y is true under the valuation u[x=a], where u[x=a] agrees with u everywhere except x, on which it takes the value a. This definition involves a metalogical operation that produces u[x=a] from u for all possibl ..."
Cited by 1012 (7 self)
… defined to be true under the valuation u iff there exists an a ∈ N such that the formula x² = y is true under the valuation u[x/a], where u[x/a] agrees with u everywhere except x, on which it takes the value a. This definition involves a metalogical operation that produces u[x/a] from u for all possible values a ∈ N. This operation becomes explicit in DL in the form of the program x := ?, called a nondeterministic or wildcard assignment. This is a rather unconventional program, since it is not effective; however, it is quite useful as a descriptive tool. A more conventional way to obtain a square root of y, if it exists, would be the program x := 0; while x² < y do x := x + 1 (1). In DL, such programs are first-class objects on a par with formulas, complete with a collection of operators for forming compound programs inductively from a basis of primitive programs. To discuss the effect of the execution of a program α on the truth of a formula φ, DL uses a modal construct ⟨α⟩φ, which …
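Program (1) transcribes directly into ordinary code; a small sketch (the function name is invented here), assuming natural-number inputs:

    def square_root(y):
        x = 0
        while x * x < y:      # while x^2 < y do x := x + 1
            x += 1
        # defined only "if it exists", i.e. when y is a perfect square
        return x if x * x == y else None

    print(square_root(9))     # 3
    print(square_root(8))     # None: 8 has no exact integer square root

Unlike the wildcard assignment x := ?, this program is effective: it terminates on every natural number and either produces the root or reports that none exists.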
Parallel Numerical Linear Algebra
1993
"... We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illust ..."
Cited by 773 (23 self)
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, and the singular value decomposition. We consider dense, band and sparse matrices.
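To make the running example concrete, here is a minimal sketch of blocked matrix multiplication in pure Python (the block size b and the function name are illustrative assumptions); the point is that the block-level loops over i0 and j0 are independent, which is exactly what a parallel implementation exploits:

    def blocked_matmul(A, B, b=2):
        """Multiply n-by-n matrices by b-by-b blocks (nested-list layout)."""
        n = len(A)
        C = [[0.0] * n for _ in range(n)]
        # six nested loops; each (i0, j0) block of C can go to a different processor
        for i0 in range(0, n, b):
            for j0 in range(0, n, b):
                for k0 in range(0, n, b):
                    for i in range(i0, min(i0 + b, n)):
                        for j in range(j0, min(j0 + b, n)):
                            s = 0.0
                            for k in range(k0, min(k0 + b, n)):
                                s += A[i][k] * B[k][j]
                            C[i][j] += s
        return C

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(blocked_matmul(A, B, b=1))   # [[19.0, 22.0], [43.0, 50.0]]

Blocking also governs communication cost: each b-by-b block is moved once and reused b times, which is the basic principle the survey develops for real machines.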
Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms
1987
"... In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in qlogn) amortized tim ..."
Cited by 739 (18 self)
In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in O(log n) amortized time and all other standard heap operations in O(1) amortized time. Using F-heaps we are able to obtain improved running times for several network optimization algorithms. In particular, we obtain the following worst-case bounds, where n is the number of vertices and m the number of edges in the problem graph: (1) O(n log n + m) for the single-source shortest path problem with nonnegative edge lengths, improved from O(m log_(m/n+2) n); (2) O(n² log n + nm) for the all-pairs shortest path problem, improved from O(nm log_(m/n+2) n); (3) O(n² log n + nm) for the assignment problem (weighted bipartite matching), improved from O(nm log_(m/n+2) n); (4) O(m β(m, n)) for the minimum spanning tree problem, improved from O(m log log_(m/n+2) n), where β(m, n) = min {i | log⁽ⁱ⁾ n ≤ m/n}. Note that β(m, n) ≤ log* n if m ≥ n. Of these results, the improved bound for minimum spanning trees is the most striking, although all the …
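Bound (1) can be illustrated with Dijkstra's algorithm driven by a priority queue. The sketch below (names invented here) uses Python's heapq, a binary heap, so it runs in O(m log n); substituting an F-heap, whose decrease-key costs O(1) amortized, is what yields the paper's O(n log n + m) bound:

    import heapq

    def dijkstra(graph, source):
        """graph: {vertex: [(neighbor, nonnegative_length), ...]}"""
        dist = {source: 0}
        pq = [(0, source)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue            # stale entry: our substitute for decrease-key
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist

    g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
    print(dijkstra(g, "s"))   # {'s': 0, 'a': 2, 'b': 3}

The algorithm performs up to m decrease-key operations but only n delete-min operations, so making decrease-key O(1) amortized shifts the cost from O(m log n) to O(n log n + m).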
Learnability and the Vapnik-Chervonenkis dimension
1989
"... Valiant’s learnability model is extended to learning classes of concepts defined by regions in Euclidean space E”. The methods in this paper lead to a unified treatment of some of Valiant’s results, along with previous results on distribution-free convergence of certain pattern recognition algorith ..."
Cited by 727 (22 self)
Valiant’s learnability model is extended to learning classes of concepts defined by regions in Euclidean space Eⁿ. The methods in this paper lead to a unified treatment of some of Valiant’s results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and the necessary and sufficient conditions are provided for feasible learnability.
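A brute-force illustration of the parameter (a textbook example, not drawn from the paper): a point set is shattered by a concept class if every labeling of it is realized by some concept, and the VC dimension is the size of the largest shattered set. Closed intervals on the line have VC dimension 2:

    def interval_labels(points, a, b):
        """Labeling induced on `points` by the interval [a, b]."""
        return tuple(a <= p <= b for p in points)

    def shattered(points, endpoints):
        """Do intervals with the candidate endpoints realize every labeling?"""
        realized = {interval_labels(points, a, b)
                    for a in endpoints for b in endpoints}
        realized.add(tuple(False for _ in points))   # empty interval (also a > b)
        return len(realized) == 2 ** len(points)

    ends = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
    print(shattered([1.0, 2.0], ends))        # True: 2 points are shattered
    print(shattered([1.0, 2.0, 3.0], ends))   # False: (T, F, T) is unrealizable

No three points can be shattered because an interval must label a contiguous block of points true, so the VC dimension of intervals is exactly 2.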
Automatic Subspace Clustering of High Dimensional Data
Data Mining and Knowledge Discovery, 2005
"... Data mining applications place special requirements on clustering algorithms including: the ability to find clusters embedded in subspaces of high dimensional data, scalability, end-user comprehensibility of the results, non-presumption of any canonical data distribution, and insensitivity to the or ..."
Cited by 724 (12 self)
Data mining applications place special requirements on clustering algorithms including: the ability to find clusters embedded in subspaces of high dimensional data, scalability, end-user comprehensibility of the results, non-presumption of any canonical data distribution, and insensitivity to the order of input records. We present CLIQUE, a clustering algorithm that satisfies each of these requirements. CLIQUE identifies dense clusters in subspaces of maximum dimensionality. It generates cluster descriptions in the form of DNF expressions that are minimized for ease of comprehension. It produces identical results irrespective of the order in which input records are presented and does not presume any specific mathematical form for data distribution. Through experiments, we show that CLIQUE efficiently finds accurate clusters in large high dimensional datasets.
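A toy sketch of CLIQUE's density step under simplifying assumptions (the parameter names xi and tau follow common presentations of grid-based clustering; the bottom-up subspace search and the DNF minimization are omitted): partition each dimension into xi equal-width units and keep the grid cells holding more than tau points.

    from collections import Counter

    def dense_units(points, dims, xi=4, tau=2, lo=0.0, hi=1.0):
        """Return dense grid cells in the subspace spanned by axes `dims`,
        assuming coordinates lie in [lo, hi]."""
        width = (hi - lo) / xi
        counts = Counter()
        for p in points:
            cell = tuple(min(int((p[d] - lo) / width), xi - 1) for d in dims)
            counts[cell] += 1
        return {cell for cell, c in counts.items() if c > tau}

    pts = [(0.1, 0.1), (0.15, 0.12), (0.2, 0.1), (0.8, 0.9), (0.1, 0.9)]
    print(dense_units(pts, dims=(0,)))      # {(0,)}: dense unit in the x-subspace
    print(dense_units(pts, dims=(0, 1)))    # {(0, 0)}: dense 2-D unit

The order-independence claimed in the abstract is visible even in this sketch: cell counts are the same however the input records are permuted.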
Trace Scheduling: A Technique for Global Microcode Compaction
IEEE Transactions on Computers, 1981
"... Microcode compaction is the conversion of sequential microcode into efficient parallel (horizontal) microcode. Local com-paction techniques are those whose domain is basic blocks of code, while global methods attack code with a general flow control. Compilation of high-level microcode languages int ..."
Cited by 683 (5 self)
Microcode compaction is the conversion of sequential microcode into efficient parallel (horizontal) microcode. Local compaction techniques are those whose domain is basic blocks of code, while global methods attack code with a more general flow of control. Compilation of high-level microcode languages into efficient horizontal microcode and good hand coding probably both require effective global compaction techniques. In this paper "trace scheduling" is developed as a solution to the global compaction problem. Trace scheduling works on traces (or paths) through microprograms. Compacting is thus done with a broad overview of the program. Important operations are given priority, no matter what their source block was. This is in sharp contrast with earlier methods, which compact one block at a time and then attempt iterative improvement. It is argued that those methods suffer from the lack of an overview and make many undesirable compactions, often preventing desirable ones. Loops are handled using the reducible property of most flow graphs. The loop handling technique permits the operations to move around loops, as well as into loops where appropriate. Trace scheduling is developed on a simplified and straightforward model of microinstructions. Guides to the extension to more general models are given.
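A toy sketch of compacting a single trace (an illustrative greedy list scheduler, not Fisher's full algorithm; the 3-wide machine model and the priority values are invented): pack ready operations into wide instructions, letting important operations go first regardless of which source block they came from.

    def schedule_trace(ops, deps, width=3):
        """ops: {name: priority}; deps: {name: set of ops that must come earlier}.
        Returns a list of wide instructions (cycles), each holding up to `width` ops."""
        done, cycles = set(), []
        remaining = dict(ops)
        while remaining:
            # an operation is ready once all of its predecessors are scheduled
            ready = [o for o in remaining if deps.get(o, set()) <= done]
            ready.sort(key=lambda o: -remaining[o])   # important operations first
            cycle = ready[:width]
            cycles.append(cycle)
            for o in cycle:
                done.add(o)
                del remaining[o]
        return cycles

    ops = {"load1": 3, "load2": 3, "add": 2, "store": 1, "branch": 2}
    deps = {"add": {"load1", "load2"}, "store": {"add"}}
    print(schedule_trace(ops, deps))
    # [['load1', 'load2', 'branch'], ['add'], ['store']]

Note how the branch is packed alongside the loads even though it came from a different block; a block-at-a-time compactor could not have made that move.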
How bad is selfish routing?
Journal of the ACM, 2002
"... We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route t ..."
Cited by 657 (27 self)
We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times—the total latency—is minimized. In many settings, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a “selfishly motivated” assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance. In this article, we quantify the degradation in network performance due to unregulated traffic. We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4/3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and nondecreasing in the edge congestion. Here, the total …
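The 4/3 bound for linear latencies is usually illustrated with Pigou's two-link example (the standard illustration, not quoted from this abstract); a few lines suffice to exhibit the worst case:

    # One unit of traffic over two parallel links from s to t,
    # with latency functions l1(x) = 1 and l2(x) = x.
    def total_latency(x):
        """x = fraction of traffic routed on the variable-latency link l2."""
        return x * x + (1 - x) * 1.0   # x users each suffer x; the rest suffer 1

    # Selfish users all pick link 2, since its latency x never exceeds 1,
    # so the Nash flow puts everything there: x = 1.
    nash = total_latency(1.0)          # 1.0
    # The optimum splits traffic: minimizing x^2 + (1 - x) gives x = 1/2.
    opt = total_latency(0.5)           # 0.75
    print(nash / opt)                  # 1.333...: exactly the 4/3 worst case

Selfish users ignore the congestion they impose on others, which is why the Nash flow overloads the variable-latency link relative to the optimum.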