Results 1 – 10 of 150
Pseudo-Random Generation from One-Way Functions
 PROC. 20TH STOC
, 1988
Abstract

Cited by 861 (18 self)
Pseudorandom generators are fundamental to many theoretical and applied aspects of computing. We show how to construct a pseudorandom generator from any one-way function. Since it is easy to construct a one-way function from a pseudorandom generator, this result shows that there is a pseudorandom generator if and only if there is a one-way function.
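The "easy" direction mentioned in the abstract can be sketched concretely: if G is a length-doubling pseudorandom generator, then f(x) = G(x) is one-way, since an inverter would recover a seed and thereby distinguish G's output from random. The toy stretch function below is only a stand-in built from a hash, not a genuine pseudorandom generator.

```python
import hashlib

def G(seed: bytes) -> bytes:
    """Toy length-doubling 'generator': stretches n bytes to 2n bytes.
    A stand-in for illustration only, not a proven pseudorandom generator."""
    out = b""
    counter = 0
    while len(out) < 2 * len(seed):
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[: 2 * len(seed)]

def f(x: bytes) -> bytes:
    # The easy direction: if G is a pseudorandom generator, then
    # f(x) = G(x) is one-way -- inverting f would recover a seed and
    # hence distinguish G's output from a truly random string.
    return G(x)

y = f(b"16-byte seed....")
```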
Algorithmic information theory
 IBM JOURNAL OF RESEARCH AND DEVELOPMENT
, 1977
Abstract

Cited by 385 (18 self)
This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.
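The "number of bits required to specify" a string is uncomputable in general, but a compressed encoding gives a crude upper-bound proxy, which makes the idea tangible: highly regular strings admit short descriptions, while coin-flip strings do not. This is only an illustrative proxy, not a construction from the paper.

```python
import zlib
import random

def description_length_proxy(s: bytes) -> int:
    """Crude upper bound on description length: size of a zlib-compressed
    encoding. A proxy only -- true algorithmic complexity is uncomputable."""
    return len(zlib.compress(s, 9))

# A highly regular string has a short description ("print 'ab' 5000 times").
structured = b"ab" * 5000

# A string of coin flips is almost certainly incompressible.
random.seed(0)
coin_flips = bytes(random.randrange(256) for _ in range(10000))

short = description_length_proxy(structured)
long_ = description_length_proxy(coin_flips)
```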
The method of creative telescoping
 J. Symbolic Computation
, 1991
Abstract

Cited by 201 (11 self)
An algorithm for definite hypergeometric summation is given. It is based, in a non-obvious way, on Gosper's algorithm for indefinite hypergeometric summation, and its theoretical justification relies on Bernstein's theory of holonomic systems.
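The output of creative telescoping is a recurrence plus a rational "certificate" that makes the recurrence routine to verify. As a hedged illustration (a standard textbook example, not one taken from the paper), the identity Σ_k C(n,k)/2^n = 1 has summand F(n,k) = C(n,k)/2^n with certificate companion G(n,k) = -C(n,k-1)/2^(n+1), and the telescoping relation F(n+1,k) - F(n,k) = G(n,k+1) - G(n,k) can be checked exactly:

```python
from math import comb
from fractions import Fraction

def F(n, k):
    # Summand of the identity sum_k C(n, k) / 2^n = 1.
    return Fraction(comb(n, k), 2 ** n)

def G(n, k):
    # Telescoping certificate for this example (assumed standard form,
    # not from the paper): G(n, k) = -C(n, k - 1) / 2^(n + 1).
    return Fraction(-comb(n, k - 1), 2 ** (n + 1)) if k >= 1 else Fraction(0)

def telescopes(n, k):
    # Creative-telescoping relation: F(n+1, k) - F(n, k) = G(n, k+1) - G(n, k).
    # Summing over all k collapses the right side, proving the identity.
    return F(n + 1, k) - F(n, k) == G(n, k + 1) - G(n, k)
```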
The Multiflow Trace Scheduling Compiler
 Journal of Supercomputing
, 1993
Abstract

Cited by 189 (1 self)
The Multiflow compiler uses the trace scheduling algorithm to find and exploit instruction-level parallelism beyond basic blocks. The compiler generates code for VLIW computers that issue up to 28 operations each cycle and maintain more than 50 operations in flight. At Multiflow the compiler generated code for eight different target machine architectures and compiled over 50 million lines of FORTRAN and C applications and systems code. The requirement of finding large amounts of parallelism in ordinary programs, the trace scheduling algorithm, and the many unique features of the Multiflow hardware placed novel demands on the compiler. New techniques in instruction scheduling, register allocation, memory-bank management, and intermediate-code optimizations were developed, as were refinements to reduce the overhead of trace scheduling. This paper describes the Multiflow compiler and reports on the Multiflow practice and experience with compiling for instruction-level parallelism beyond basic blocks.
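The core of trace scheduling is picking traces: sequences of basic blocks along likely execution paths, which are then scheduled as a unit. A minimal sketch of greedy trace growth along highest-frequency edges is shown below; this is a simplified textbook-style heuristic, not the Multiflow compiler's actual implementation, and the graph and names are invented for illustration.

```python
def pick_trace(succ_freq, seed, in_trace):
    """Greedy trace growth: from `seed`, repeatedly follow the
    highest-frequency successor edge whose target is not yet in any trace.
    `succ_freq` maps block -> {successor: edge_frequency}.
    A simplified sketch, not the Multiflow compiler's heuristics."""
    trace = [seed]
    in_trace.add(seed)
    cur = seed
    while True:
        cands = {b: f for b, f in succ_freq.get(cur, {}).items()
                 if b not in in_trace}
        if not cands:
            break
        cur = max(cands, key=cands.get)
        trace.append(cur)
        in_trace.add(cur)
    return trace

# Toy CFG: A branches to B (hot) and C (cold); both rejoin at D.
cfg = {"A": {"B": 90, "C": 10}, "B": {"D": 90}, "C": {"D": 10}}
hot_trace = pick_trace(cfg, "A", set())
```

The scheduler would then treat the hot trace as a single straight-line region, inserting compensation code on the off-trace edges.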
Squarified Treemaps
 In Proceedings of the Joint Eurographics and IEEE TCVG Symposium on Visualization
, 1999
Abstract

Cited by 172 (2 self)
An extension to the treemap method for the visualization of hierarchical information, such as directory structures and organization structures, is presented. The standard treemap method often gives thin, elongated rectangles. As a result, rectangles are difficult to compare and to select. A new method is presented to generate layouts in which the rectangles approximate squares. To strengthen the visualization of the structure, shaded frames are used around groups of related nodes.
1 Introduction
Hierarchical structures of information are everywhere: directory structures, organization structures, family trees, catalogues, computer programs, and so on. Small hierarchical structures are effective to locate information, but the content and organization of large structures is harder to grasp. We present a new visualization method for large hierarchical structures: Squarified Treemaps. The method is based on Treemaps, developed by Shneiderman and Johnson [9, 6]. Treemaps are effic...
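The key decision in the squarified method is whether adding one more area to the current row of rectangles improves or worsens the worst aspect ratio in that row. A hedged sketch of that criterion follows; it holds the row side length fixed rather than updating it per row as the full algorithm does, and assumes areas arrive pre-sorted in decreasing order.

```python
def worst_ratio(row, side):
    """Worst aspect ratio among rectangles for areas `row` laid out in a
    strip along an edge of length `side`."""
    s = sum(row)
    return max(max(side * side * r / (s * s), (s * s) / (side * side * r))
               for r in row)

def squarify_rows(areas, side):
    """Greedy row partition: keep adding areas to the current row while
    the worst aspect ratio does not get worse. Simplified sketch only:
    `side` stays fixed, whereas the full algorithm shrinks the free
    rectangle after each row."""
    rows, row = [], []
    for a in areas:
        if not row or worst_ratio(row + [a], side) <= worst_ratio(row, side):
            row.append(a)
        else:
            rows.append(row)
            row = [a]
    if row:
        rows.append(row)
    return rows

rows = squarify_rows([6, 6, 4, 3, 2, 2, 1], 4)
```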
Speeding Up The Computations On An Elliptic Curve Using Addition-Subtraction Chains
 THEORETICAL INFORMATICS AND APPLICATIONS
, 1990
Abstract

Cited by 106 (4 self)
We show how to compute x^k using multiplications and divisions. We use this method in the context of elliptic curves, for which a law exists with the property that division has the same cost as multiplication. Our best algorithm is 11.11% faster than the ordinary binary algorithm and speeds up accordingly the factorization and primality testing algorithms that use elliptic curves.
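The idea can be sketched with the non-adjacent form (NAF): write k with signed digits in {-1, 0, 1} so that no two nonzero digits are adjacent, then exponentiate with a square per digit and a multiply or divide per nonzero digit. Since NAF has fewer nonzero digits than binary on average, this wins exactly when division is as cheap as multiplication, as with point negation on an elliptic curve. Plain rationals stand in for curve points in this sketch.

```python
from fractions import Fraction

def naf(k):
    """Non-adjacent form of k: signed digits in {-1, 0, 1}, least
    significant first, with no two adjacent nonzero digits."""
    digits = []
    while k > 0:
        if k % 2:
            d = 2 - (k % 4)   # +1 if k % 4 == 1, -1 if k % 4 == 3
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

def pow_signed(x, k):
    """Exponentiation via the signed-digit chain: one squaring per digit,
    plus a multiply (digit +1) or divide (digit -1) per nonzero digit.
    Rationals stand in for elliptic-curve points, where division
    (negation) costs the same as multiplication."""
    result = Fraction(1)
    for d in reversed(naf(k)):
        result *= result
        if d == 1:
            result *= x
        elif d == -1:
            result /= x
    return result
```

For k = 255 the binary expansion has eight nonzero bits, while the NAF 100000001̄ has only two, which is where the speedup comes from.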
A chronology of interpolation: From ancient astronomy to modern signal and image processing
 Proceedings of the IEEE
, 2002
Abstract

Cited by 105 (0 self)
This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into historical perspective. A summary of the insights and recommendations that follow from relatively recent theoretical as well as experimental studies concludes the presentation. Keywords—Approximation, convolution-based interpolation, history, image processing, polynomial interpolation, signal processing, splines. “It is an extremely useful thing to have knowledge of the true origins of memorable discoveries, especially those that have been found not by accident but by dint of meditation. It is not so much that thereby history may attribute to each man his own discoveries and others should be encouraged to earn like commendation, as that the art of making discoveries should be extended by considering noteworthy examples of it.”
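Polynomial interpolation is the thread running through this chronology, and its classical closed form is short enough to state in code. The sketch below evaluates the Lagrange form p(x) = Σ_i y_i · Π_{j≠i} (x - x_j)/(x_i - x_j) exactly over the rationals.

```python
from fractions import Fraction

def lagrange_interpolate(points, x):
    """Evaluate at x the unique polynomial of degree < n through the n
    given (x_i, y_i) points, using the Lagrange form."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Three samples of y = x^2 determine the parabola exactly everywhere else.
pts = [(0, 0), (1, 1), (2, 4)]
```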
Computing Minimum-Weight Perfect Matchings
 INFORMS
, 1999
Abstract

Cited by 98 (2 self)
We make several observations on the implementation of Edmonds’ blossom algorithm for solving minimum-weight perfect-matching problems, and we present computational results for geometric problem instances ranging in size from 1,000 nodes up to 5,000,000 nodes. A key feature in our implementation is the use of multiple search trees with an individual dual-change ε for each tree. As a benchmark of the algorithm’s performance, solving a 100,000-node geometric instance on a 200 MHz Pentium Pro computer takes approximately 3 minutes.
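The blossom algorithm itself is intricate, but the problem it solves is easy to state, and for tiny geometric instances an exhaustive search makes a useful correctness baseline. The sketch below enumerates all pairings of an even planar point set with Euclidean edge weights; it shares nothing with the blossom algorithm's efficiency.

```python
from math import hypot

def min_weight_perfect_matching(points):
    """Exhaustive minimum-weight perfect matching on a small even set of
    planar points (Euclidean weights). Correctness baseline only: it
    explores all (n-1)!! pairings, unlike the polynomial blossom algorithm."""
    n = len(points)
    assert n % 2 == 0
    def solve(rest):
        if not rest:
            return 0.0, []
        a, *others = rest
        best = (float("inf"), [])
        for i, b in enumerate(others):
            remaining = others[:i] + others[i + 1:]
            w, m = solve(remaining)
            w += hypot(points[a][0] - points[b][0],
                       points[a][1] - points[b][1])
            if w < best[0]:
                best = (w, [(a, b)] + m)
        return best
    return solve(list(range(n)))

# Two short vertical edges beat any crossing pairing.
weight, matching = min_weight_perfect_matching([(0, 0), (0, 1), (5, 0), (5, 1)])
```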
Parallel data mining for association rules on shared-memory multiprocessors
 In Proc. Supercomputing’96
, 1996
Abstract

Cited by 91 (20 self)
In this paper we present a new parallel algorithm for data mining of association rules on shared-memory multiprocessors. We study the degree of parallelism, synchronization, and data locality issues, and present optimizations for fast frequency computation. Experiments show that a significant improvement of performance is achieved using our proposed optimizations. We also achieved good speedup for the parallel algorithm. Many data-mining tasks (e.g., association rules, sequential patterns) use complex pointer-based data structures (e.g., hash trees) that typically suffer from suboptimal data locality. In the multiprocessor case, shared access to these data structures may also result in false sharing. For these tasks it is commonly observed that the recursive data structure is built once and accessed multiple times during each iteration. Furthermore, the access patterns after the build phase are highly ordered. In such cases, locality- and false-sharing-sensitive memory placement of these structures can enhance performance significantly. We evaluate a set of placement policies for parallel association discovery, and show that simple placement schemes can improve execution time by more than a factor of two. More complex schemes yield additional gains.
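The "frequency computation" being optimized is support counting: tallying how often each candidate itemset occurs across the transactions. A sequential toy version of that kernel is sketched below; the paper's contribution is parallelizing and placing exactly this kind of counting structure, and the tiny market-basket data here is invented for illustration.

```python
from itertools import combinations
from collections import Counter

def count_supports(transactions, k):
    """Support counts for every k-itemset occurring in `transactions` --
    the frequency-computation kernel association-rule miners optimize.
    Sequential toy, not the paper's parallel shared-memory algorithm."""
    counts = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            counts[itemset] += 1
    return counts

# Invented market-basket transactions.
db = [{"bread", "milk"}, {"bread", "beer"}, {"bread", "milk", "beer"}]
pair_support = count_supports(db, 2)
```

In a real miner these counters live in hash trees accessed by many threads, which is where the data-locality and false-sharing issues in the abstract arise.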
Approximating the unsatisfiability threshold of random formulas
, 1998
Abstract

Cited by 88 (15 self)
Let φ be a random Boolean formula that is an instance of 3-SAT. We consider the problem of computing the least real number κ such that, if the ratio of the number of clauses over the number of variables of φ strictly exceeds κ, then φ is almost certainly unsatisfiable. By a well-known and more or less straightforward argument, it can be shown that κ ≤ 5.191. This upper bound was improved by Kamath et al. to 4.758 by first providing new improved bounds for the occupancy problem. There is strong experimental evidence that the value of κ is around 4.2. In this work, we define, in terms of the random formula φ, a decreasing sequence of random variables such that, if the expected value of any one of them converges to zero, then φ is almost certainly unsatisfiable. By letting the expected value of the first term of the sequence converge to zero, we obtain, by simple and elementary computations, an upper bound for κ equal to 4.667. From the expected value of the second term of the sequence, we get the value 4.601+. In general, by letting the
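The "well-known and more or less straightforward argument" behind the 5.191 bound is the first-moment method: a fixed assignment satisfies a random 3-clause with probability 7/8, so the expected number of satisfying assignments of a formula with n variables and m clauses is 2^n (7/8)^m, which tends to zero whenever m/n exceeds 1/log₂(8/7). The worked computation:

```python
from math import log

# First-moment method for random 3-SAT: each fixed assignment satisfies
# a random 3-clause with probability 7/8, so
#     E[# satisfying assignments] = 2**n * (7/8)**m.
# This expectation vanishes as n grows whenever m/n exceeds the ratio
# below, giving the 5.191 upper bound quoted in the abstract.
ratio = log(2) / log(8 / 7)
```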