Results 1 - 10 of 872
Simulation of Simplicity: A Technique to Cope with Degenerate Cases in Geometric Algorithms
- ACM Trans. Graph.
, 1990
"... This paper describes a general-purpose programming technique, called the Simulation of Simplicity, which can be used to cope with degenerate input data for geometric algorithms. It relieves the programmer from the task to provide a consistent treatment for every single special case that can occur. T ..."
Abstract - Cited by 305 (19 self)
This paper describes a general-purpose programming technique, called the Simulation of Simplicity, which can be used to cope with degenerate input data for geometric algorithms. It relieves the programmer of the task of providing a consistent treatment for every single special case that can occur. Programs that use the technique tend to be considerably smaller and more robust than those that do not. We believe that this technique will become a standard tool in writing geometric software.
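As a rough illustration of the idea (not the paper's actual formulation, which perturbs the input symbolically by powers of a formal epsilon), the sketch below resolves an exactly degenerate 2D orientation test with a deterministic index-based tie-break, so the calling algorithm never observes a zero result:

    from fractions import Fraction

    def orient2d(pa, pb, pc):
        # Sign of twice the signed area of triangle (pa, pb, pc):
        # +1 counter-clockwise, -1 clockwise, 0 collinear (exact, thanks to Fraction).
        det = (Fraction(pb[0]) - pa[0]) * (Fraction(pc[1]) - pa[1]) \
            - (Fraction(pb[1]) - pa[1]) * (Fraction(pc[0]) - pa[0])
        return (det > 0) - (det < 0)

    def orient2d_nodegeneracy(ia, ib, ic, points):
        # Never returns 0: an exactly collinear triple is resolved by a deterministic
        # tie-break on the point indices, as if the points had been perturbed
        # infinitesimally. (Illustrative rule only; the real SoS expansion is richer.)
        s = orient2d(points[ia], points[ib], points[ic])
        if s != 0:
            return s
        perm = (ia, ib, ic)
        inversions = sum(1 for i in range(3) for j in range(i + 1, 3) if perm[i] > perm[j])
        return 1 if inversions % 2 == 0 else -1   # depends only on the index permutation

    points = [(0, 0), (1, 1), (2, 2), (3, 0)]      # first three points are collinear
    print(orient2d_nodegeneracy(0, 1, 2, points))  # degenerate case, still non-zero
    print(orient2d_nodegeneracy(0, 1, 3, points))  # ordinary case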
A Fast Fourier Transform Compiler
, 1999
"... FFTW library for computing the discrete Fourier transform (DFT) has gained a wide acceptance in both academia and industry, because it provides excellent performance on a variety of machines (even competitive with or faster than equivalent libraries supplied by vendors). In FFTW, most of the perform ..."
Abstract - Cited by 199 (5 self)
The FFTW library for computing the discrete Fourier transform (DFT) has gained wide acceptance in both academia and industry because it provides excellent performance on a variety of machines (even competitive with or faster than equivalent libraries supplied by vendors). In FFTW, most of the performance-critical code was generated automatically by a special-purpose compiler, called genfft, that outputs C code. Written in Objective Caml, genfft can produce DFT programs for any input length, and it can specialize the DFT program for the common case where the input data are real instead of complex. Unexpectedly, genfft “discovered” algorithms that were previously unknown, and it was able to reduce the arithmetic complexity of some other existing algorithms. This paper describes the internals of this special-purpose compiler in some detail, and it argues that a specialized compiler is a valuable tool.
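genfft itself is written in Objective Caml and performs substantial algebraic simplification; the toy generator below (Python, for brevity) only illustrates the core idea of such a special-purpose compiler: for a fixed transform size, every loop index and twiddle factor is known at generation time, so straight-line C can be emitted with all constants folded in:

    import cmath

    def gen_dft_codelet(n):
        # Emit straight-line C that computes a size-n complex DFT directly from its
        # definition, with every loop index and twiddle factor resolved at generation
        # time. A toy stand-in for genfft, which additionally simplifies the resulting
        # expression DAG and applies FFT identities to cut the operation count.
        lines = [f"void dft_{n}(const double *re, const double *im, double *Re, double *Im) {{"]
        for k in range(n):
            re_terms, im_terms = [], []
            for j in range(n):
                w = cmath.exp(-2j * cmath.pi * j * k / n)   # constant at generation time
                re_terms.append(f"({w.real:+.17g})*re[{j}] - ({w.imag:+.17g})*im[{j}]")
                im_terms.append(f"({w.real:+.17g})*im[{j}] + ({w.imag:+.17g})*re[{j}]")
            lines.append(f"    Re[{k}] = " + " + ".join(re_terms) + ";")
            lines.append(f"    Im[{k}] = " + " + ".join(im_terms) + ";")
        lines.append("}")
        return "\n".join(lines)

    print(gen_dft_codelet(4))   # prints a complete, compilable C function for a size-4 DFT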
TinyECC: A Configurable Library for Elliptic Curve Cryptography in Wireless Sensor Networks
"... Public Key Cryptography (PKC) has been the enabling technology underlying many security services and protocols in traditional networks such as the Internet. In the context of wireless sensor networks, elliptic curve cryptography (ECC), one of the most efficient types of PKC, is being investigated to ..."
Abstract - Cited by 146 (1 self)
Public Key Cryptography (PKC) has been the enabling technology underlying many security services and protocols in traditional networks such as the Internet. In the context of wireless sensor networks, elliptic curve cryptography (ECC), one of the most efficient types of PKC, is being investigated to provide PKC support in sensor network applications so that existing PKC-based solutions can be exploited. This paper presents the design, implementation, and evaluation of TinyECC, a configurable library for ECC operations in wireless sensor networks. The primary objective of TinyECC is to provide a ready-to-use, publicly available software package for ECC-based PKC operations that can be flexibly configured and integrated into sensor network applications. TinyECC provides a number of optimization switches, which can turn specific optimizations on or off based on developers’ needs. Different combinations of the optimizations have different execution times and resource consumption, giving developers great flexibility in integrating TinyECC into sensor network applications. This paper also reports the experimental evaluation of TinyECC on several common sensor platforms, including MICAz, Tmote Sky, and Imote2. The evaluation results show the impact of individual optimizations on execution time and resource consumption, and identify the most computationally efficient and the most storage-efficient configurations of TinyECC.
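For readers unfamiliar with the underlying primitive, the sketch below shows the core operation such a library wraps and optimizes: scalar multiplication on a short Weierstrass curve via double-and-add. The tiny textbook curve used here (y^2 = x^3 + 2x + 2 over F_17) is for illustration only and offers no security; TinyECC itself targets standard curves of realistic size and hides its optimizations (faster modular arithmetic, alternative coordinate systems, and so on) behind configuration switches:

    A, B, P = 2, 2, 17          # toy short Weierstrass curve y^2 = x^3 + A*x + B over F_P
    INF = None                  # the point at infinity (group identity)

    def point_add(p1, p2):
        # Affine addition of two curve points modulo P.
        if p1 is INF:
            return p2
        if p2 is INF:
            return p1
        (x1, y1), (x2, y2) = p1, p2
        if x1 == x2 and (y1 + y2) % P == 0:
            return INF                                           # p2 == -p1
        if p1 == p2:
            lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P     # tangent slope (doubling)
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, P) % P            # chord slope (addition)
        x3 = (lam * lam - x1 - x2) % P
        y3 = (lam * (x1 - x3) - y1) % P
        return (x3, y3)

    def scalar_mult(k, point):
        # Left-to-right double-and-add; a real library would prefer a constant-time ladder.
        result = INF
        for bit in bin(k)[2:]:
            result = point_add(result, result)                   # double
            if bit == "1":
                result = point_add(result, point)                # add
        return result

    G = (5, 1)                                                   # point of order 19 on this toy curve
    assert (G[1] ** 2 - (G[0] ** 3 + A * G[0] + B)) % P == 0     # G lies on the curve
    print(scalar_mult(7, G))     # "public key" for private scalar 7
    print(scalar_mult(19, G))    # multiplying by the point's order returns None (infinity)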
Agent-based computational models and generative social science
- Complexity
, 1999
"... This article argues that the agent-based computational model permits a distinctive approach to social science for which the term “generative ” is suitable. In defending this terminology, features distinguishing the approach from both “inductive ” and “deductive ” science are given. Then, the followi ..."
Abstract - Cited by 122 (0 self)
This article argues that the agent-based computational model permits a distinctive approach to social science for which the term “generative” is suitable. In defending this terminology, features distinguishing the approach from both “inductive” and “deductive” science are given. Then, the following specific contributions to social science are discussed: The agent-based computational model is a new tool for empirical research. It offers a natural environment for the study of connectionist phenomena in social science. Agent-based modeling provides a powerful way to address certain enduring—and especially interdisciplinary—questions. It allows one to subject certain core theories—such as neoclassical microeconomics—to important types of stress (e.g., the effect of evolving preferences). It permits one to study how rules of individual behavior give rise—or “map up”—to macroscopic regularities and organizations. In turn, one can employ laboratory behavioral research findings to select among competing agent-based (“bottom up”) models. The agent-based approach may well have the important effect of decoupling individual rationality from macroscopic equilibrium and of separating decision science from social science more generally. Agent-based modeling offers powerful new forms of hybrid theoretical-computational work; these are particularly relevant to the study of non-equilibrium systems. The agent-based approach invites the interpretation of society as a distributed computational device, and in turn the interpretation of social dynamics as a type of computation. This interpretation raises important foundational issues in social science—some related to intractability, and some to undecidability proper. Finally, since “emergence” figures prominently in this literature, I take up the connection between agent-based modeling and classical emergentism, criticizing the latter and arguing that the two are incompatible. © 1999 John Wiley & Sons, Inc.
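A minimal agent-based sketch in this generative spirit (a bare-bones Schelling-style neighborhood model, chosen here purely for illustration and not drawn from the article) shows how simple individual rules can "map up" to a macroscopic regularity that no agent intends:

    import random

    SIZE, EMPTY_FRAC, SIMILAR_WANTED, STEPS = 20, 0.2, 0.5, 50_000
    random.seed(1)

    # 0 = empty cell, 1 / 2 = the two agent types, placed uniformly at random.
    grid = [[0 if random.random() < EMPTY_FRAC else random.choice([1, 2])
             for _ in range(SIZE)] for _ in range(SIZE)]

    def unhappy(r, c):
        # An agent is unhappy if fewer than SIMILAR_WANTED of its occupied
        # neighbours (torus, 8-neighbourhood) are of its own type.
        me, same, total = grid[r][c], 0, 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nb = grid[(r + dr) % SIZE][(c + dc) % SIZE]
                if nb:
                    total += 1
                    same += (nb == me)
        return total > 0 and same / total < SIMILAR_WANTED

    for _ in range(STEPS):
        r, c = random.randrange(SIZE), random.randrange(SIZE)
        if grid[r][c] and unhappy(r, c):
            er, ec = random.randrange(SIZE), random.randrange(SIZE)
            if grid[er][ec] == 0:                                # relocate to an empty cell
                grid[er][ec], grid[r][c] = grid[r][c], 0

    # Clusters of X and O emerge even though no individual rule demands segregation.
    print("\n".join("".join(".XO"[cell] for cell in row) for row in grid))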
External-Memory Computational Geometry
"... In this paper, we give new techniques for designing efficient algorithms for computational geometry problems that are too large to be solved in internal memory, and we use these techniques to develop optimal and practical algorithms for a number of important largescale problems. We discuss our algo ..."
Abstract - Cited by 120 (14 self)
In this paper, we give new techniques for designing efficient algorithms for computational geometry problems that are too large to be solved in internal memory, and we use these techniques to develop optimal and practical algorithms for a number of important large-scale problems. We discuss our algorithms primarily in the context of single-processor, single-disk machines, a domain in which they are not only the first known optimal results but also of tremendous practical value. Our methods also produce the first known optimal algorithms for a wide range of two-level and hierarchical multilevel memory models, including parallel models. The algorithms are optimal both in terms of I/O cost and internal computation.
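Such external-memory algorithms are typically built from I/O-efficient primitives that stream data in blocks so that each item is read only a logarithmic number of times. The sketch below shows the generic building block (a k-way external merge sort with a bounded in-memory buffer), not the paper's distribution-sweeping or batched geometric techniques; the file names in the usage line are hypothetical:

    import heapq, os, tempfile

    def external_sort(input_path, output_path, memory_items=100_000):
        # Pass 1: read the input in memory-sized chunks, sort each chunk, and write
        # it out as a sorted "run". Pass 2: stream-merge all runs with heapq.merge,
        # which keeps only one buffered line per run in memory at a time.
        run_paths = []
        with open(input_path) as src:
            while True:
                run = [float(line) for _, line in zip(range(memory_items), src)]
                if not run:
                    break
                run.sort()
                fd, path = tempfile.mkstemp(text=True)
                with os.fdopen(fd, "w") as f:
                    f.writelines(f"{v}\n" for v in run)
                run_paths.append(path)
        runs = [open(p) for p in run_paths]
        try:
            with open(output_path, "w") as out:
                out.writelines(heapq.merge(*runs, key=float))
        finally:
            for f in runs:
                f.close()
            for p in run_paths:
                os.remove(p)

    # external_sort("points_by_x.txt", "points_by_x.sorted.txt")   # hypothetical files, one number per line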
A five-year study of file-system metadata
- In Proceedings of the 5th USENIX Conference on File and Storage Technologies, USENIX Association
, 2007
"... For five years, we collected annual snapshots of file-system metadata from over 60,000 Windows PC file systems in a large corporation. In this article, we use these snapshots to study temporal changes in file size, file age, file-type frequency, directory size, namespace structure, file-system popul ..."
Abstract - Cited by 114 (6 self)
For five years, we collected annual snapshots of file-system metadata from over 60,000 Windows PC file systems in a large corporation. In this article, we use these snapshots to study temporal changes in file size, file age, file-type frequency, directory size, namespace structure, file-system population, storage capacity and consumption, and degree of file modification. We present a generative model that explains the namespace structure and the distribution of directory sizes. We find significant temporal trends relating to the popularity of certain file types, the origin of file content, the way the namespace is used, and the degree of variation among file systems, as well as more pedestrian changes in size and capacities. We give examples of consequent lessons for designers of file systems and related software.
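As an illustration of the kind of measurements involved (not the scanner the authors actually deployed across the corporation's machines), the sketch below collects file sizes, file-type frequency, and directory sizes for one local directory tree:

    import os
    from collections import Counter

    def snapshot(root):
        # Walk one directory tree and record the raw ingredients of the study's
        # distributions: per-file sizes, file-type (extension) frequency, and the
        # number of entries in each directory.
        file_sizes, ext_counts, dir_entries = [], Counter(), Counter()
        for dirpath, dirnames, filenames in os.walk(root):
            dir_entries[dirpath] = len(filenames) + len(dirnames)
            for name in filenames:
                try:
                    file_sizes.append(os.path.getsize(os.path.join(dirpath, name)))
                except OSError:
                    continue                                 # vanished or unreadable file
                ext_counts[os.path.splitext(name)[1].lower()] += 1
        return file_sizes, ext_counts, dir_entries

    sizes, exts, dirs = snapshot(".")
    print("files seen:", len(sizes))
    print("most common extensions:", exts.most_common(5))
    print("largest directories (by entry count):", dirs.most_common(3))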
Average-Case Analysis of Algorithms and Data Structures
, 1990
"... This report is a contributed chapter to the Handbook of Theoretical Computer Science (North-Holland, 1990). Its aim is to describe the main mathematical methods and applications in the average-case analysis of algorithms and data structures. It comprises two parts: First, we present basic combinato ..."
Abstract - Cited by 106 (8 self)
This report is a contributed chapter to the Handbook of Theoretical Computer Science (North-Holland, 1990). Its aim is to describe the main mathematical methods and applications in the average-case analysis of algorithms and data structures. It comprises two parts: First, we present basic combinatorial enumerations based on symbolic methods and asymptotic methods with emphasis on complex analysis techniques (such as singularity analysis, saddle point, Mellin transforms). Next, we show how to apply these general methods to the analysis of sorting, searching, tree data structures, hashing, and dynamic algorithms. The emphasis is on algorithms for which exact "analytic models" can be derived.
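A small concrete instance of the program the chapter describes, shown here only as an illustration: the counting generating function for binary trees, B(z) = (1 - sqrt(1 - 4z)) / (2z), has a square-root singularity at z = 1/4, and singularity analysis gives [z^n] B(z) ~ 4^n / (sqrt(pi) * n^(3/2)). The sketch below checks this asymptotic against the exact Catalan numbers:

    import math

    def catalan(n):
        # Exact coefficient [z^n] B(z): the number of binary trees with n internal nodes.
        return math.comb(2 * n, n) // (n + 1)

    def singularity_estimate(n):
        # Leading term delivered by singularity analysis of the square-root singularity.
        return 4 ** n / (math.sqrt(math.pi) * n ** 1.5)

    for n in (10, 100, 500):
        ratio = catalan(n) / singularity_estimate(n)
        print(f"n = {n:4d}: exact / asymptotic = {ratio:.4f}")   # approaches 1 as n grows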
Random Mapping Statistics
- In Advances in Cryptology
, 1990
"... Random mappings from a finite set into itself are either a heuristic or an exact model for a variety of applications in random number generation, computational number theory, cryptography, and the analysis of algorithms at large. This paper introduces a general framework in which the analysis of ..."
Abstract - Cited by 105 (6 self)
Random mappings from a finite set into itself are either a heuristic or an exact model for a variety of applications in random number generation, computational number theory, cryptography, and the analysis of algorithms at large. This paper introduces a general framework in which the analysis of about twenty characteristic parameters of random mappings is carried out: These parameters are studied systematically through the use of generating functions and singularity analysis. In particular, an open problem of Knuth is solved, namely that of finding the expected diameter of a random mapping. The same approach is applicable to a larger class of discrete combinatorial models and possibilities of automated analysis using symbolic manipulation systems ("computer algebra") are also briefly discussed.
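As an empirical companion to the exact analysis (simulation only; the paper derives such expectations analytically via generating functions), the sketch below estimates two of the characteristic parameters, the tail length and cycle length of the path obtained by iterating a random mapping from a random starting point, whose expectations grow like sqrt(pi*n/8):

    import math, random

    def rho_shape(n, rng):
        # Iterate a uniformly random mapping of {0,...,n-1} from a random starting
        # point and return (tail length, cycle length) of the resulting rho-shaped path.
        f = [rng.randrange(n) for _ in range(n)]
        first_seen = {}
        x, step = rng.randrange(n), 0
        while x not in first_seen:
            first_seen[x] = step
            x, step = f[x], step + 1
        tail = first_seen[x]              # steps taken before entering the cycle
        return tail, step - tail          # length of the cycle itself

    n, trials, rng = 10_000, 500, random.Random(42)
    tails, cycles = zip(*(rho_shape(n, rng) for _ in range(trials)))
    theory = math.sqrt(math.pi * n / 8)   # asymptotic expectation for both parameters
    print(f"mean tail length  = {sum(tails) / trials:7.1f}   (theory ~ {theory:.1f})")
    print(f"mean cycle length = {sum(cycles) / trials:7.1f}   (theory ~ {theory:.1f})")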
Good Parameters And Implementations For Combined Multiple Recursive Random Number Generators
, 1998
"... this paper is to provide good CMRGs of different sizes, selected via the spectral test up to 32 (or 24) dimensions, and a faster implementation than in L'Ecuyer (1996) using floating-point arithmetic. Why do we need different parameter sets? Firstly, different types of implementations require d ..."
Abstract - Cited by 97 (23 self)
The aim of this paper is to provide good CMRGs of different sizes, selected via the spectral test in up to 32 (or 24) dimensions, and a faster implementation than that of L'Ecuyer (1996), using floating-point arithmetic. Why do we need different parameter sets? Firstly, different types of implementations require different constraints on the modulus and multipliers. For example, a floating-point implementation with 53 bits of precision allows moduli of more than 31 bits, and this can be exploited to increase the period length for free. Secondly, as 64-bit computers become more widespread, there is demand for generators implemented in 64-bit integer arithmetic. Tables of good parameters for such generators must be made available. Thirdly, RNGs are somewhat like cars: a single model and a single size for the entire world is not the most satisfactory solution. Some people want a fast and relatively small RNG, while others prefer a bigger and more robust one, with a longer period and good equidistribution properties in larger dimensions. Naively, one could think that an RNG with period length near 2
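As a sketch of the kind of generator this work tabulates, the code below implements a combined MRG in double-precision floating point, where every intermediate product stays exactly representable in 53 bits. The coefficients are those widely published for L'Ecuyer's MRG32k3a and are reproduced here from memory; consult the paper's tables for the authoritative parameter sets and reference implementation:

    M1 = 4294967087.0             # 2^32 - 209
    M2 = 4294944443.0             # 2^32 - 22853

    class CombinedMRG:
        # Two order-3 linear recurrences combined by subtraction modulo M1.
        # Every product below stays under 2^53, so double precision is exact throughout.
        # Coefficients: as widely published for MRG32k3a (reproduced from memory).
        def __init__(self, seed=12345.0):
            self.s1 = [seed, seed, seed]   # state of the first component
            self.s2 = [seed, seed, seed]   # state of the second component

        def next_uniform(self):
            # x1_n = (1403580 * x1_{n-2} - 810728 * x1_{n-3}) mod M1
            p1 = (1403580.0 * self.s1[1] - 810728.0 * self.s1[0]) % M1
            self.s1 = [self.s1[1], self.s1[2], p1]
            # x2_n = (527612 * x2_{n-1} - 1370589 * x2_{n-3}) mod M2
            p2 = (527612.0 * self.s2[2] - 1370589.0 * self.s2[0]) % M2
            self.s2 = [self.s2[1], self.s2[2], p2]
            # Combine the two components and map to (0, 1).
            z = (p1 - p2) % M1
            return (z if z > 0.0 else M1) / (M1 + 1.0)

    rng = CombinedMRG()
    print([round(rng.next_uniform(), 6) for _ in range(5)])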