Results 1–10 of 84,843
Lambda Calculi and Linear Speedups
 The Essence of Computation: Complexity, Analysis, Transformation, number 2566 in Lecture Notes in Computer Science
, 2002
"... The equational theories at the core of most functional programming are variations on the standard lambda calculus. The best-known of these is the call-by-value lambda calculus whose core is the value-beta computation rule (λx.M)V → M[V/x] where V is restricted to be a value rather than an arb ..."
Abstract

Cited by 6 (0 self)
The equational theories at the core of most functional programming are variations on the standard lambda calculus. The best-known of these is the call-by-value lambda calculus whose core is the value-beta computation rule (λx.M)V → M[V/x] where V is restricted to be a value rather than an arbitrary term. This paper
Lambda calculi and linear speedups
 The essence of computation: complexity, analysis, transformation, number 2566 in Lecture Notes in Computer Science
, 2002
"... The equational theories at the core of most functional programming are variations on the standard lambda calculus. The best-known of these is the call-by-value lambda calculus whose core is the value-beta computation rule (λx.M)V → M[V/x] where V is restricted to be a ..."
Abstract

Cited by 2 (0 self)
The equational theories at the core of most functional programming are variations on the standard lambda calculus. The best-known of these is the call-by-value lambda calculus whose core is the value-beta computation rule (λx.M)V → M[V/x] where V is restricted to be a value rather than an arbitrary term. This paper investigates the transformational power of this core theory of functional programming. The main result is that the equational theory of the call-by-value lambda calculus cannot speed up (or slow down) programs by more than a constant factor. The corresponding result also holds for call-by-need, but we show that it does not hold for call-by-name: there are programs for which a single beta reduction can change the program's asymptotic complexity.
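The value-beta rule (λx.M)V → M[V/x] can be pictured as a single reduction step of a tiny interpreter. This is a minimal sketch, not the paper's formalism: the tuple encoding of terms and the capture-naive substitution are illustrative assumptions, adequate only for short closed examples.

```python
# Terms: a variable is a string, ("lam", x, body) is λx.body,
# ("app", f, a) is an application. (Illustrative encoding.)

def is_value(t):
    """Variables and lambda abstractions count as values."""
    return isinstance(t, str) or (isinstance(t, tuple) and t[0] == "lam")

def subst(t, x, v):
    """Capture-naive substitution t[v/x] (fine for closed examples)."""
    if isinstance(t, str):
        return v if t == x else t
    if t[0] == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, v))
    return ("app", subst(t[1], x, v), subst(t[2], x, v))

def step(t):
    """One call-by-value reduction step, or None if t is a value/stuck."""
    if isinstance(t, tuple) and t[0] == "app":
        f, a = t[1], t[2]
        if isinstance(f, tuple) and f[0] == "lam" and is_value(a):
            return subst(f[2], f[1], a)   # the value-beta rule fires
        r = step(f)                       # otherwise reduce the function...
        if r is not None:
            return ("app", r, a)
        r = step(a)                       # ...then the argument position
        if r is not None:
            return ("app", f, r)
    return None

# (λx.x) y → y: the argument is a value, so value-beta applies.
assert step(("app", ("lam", "x", "x"), "y")) == "y"
```

The restriction of V to values is exactly what `is_value` enforces: an application whose argument is itself a redex must first evaluate that argument before beta can fire.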
Scheduling Parallel Jobs with Linear Speedup
"... We consider a scheduling problem where a set of jobs is a priori distributed over parallel machines. The processing time of any job is dependent on the usage of a scarce renewable resource, e.g. personnel. An amount of k units of that resource can be allocated to the jobs at any time, and the more ..."
Abstract

Cited by 2 (2 self)
of that resource is allocated to a job, the smaller its processing time. The dependence of processing times on the amount of resources is linear for any job. The objective is to find a resource allocation and a schedule that minimizes the makespan. Utilizing an integer quadratic programming relaxation, we show how
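As a toy illustration of the objective, the sketch below computes the makespan of a fixed machine assignment under an assumed linear resource dependence p(k) = b - a·k. The model and all names here are illustrative; the paper's exact formulation may differ.

```python
def makespan(machines, alloc):
    """Makespan (latest machine finish time) for jobs pre-assigned to
    machines, where a job (b, a) given k resource units runs for b - a*k.
    machines[i] is the job list on machine i; alloc[i] the matching k's.
    (Illustrative linear model, not the paper's exact formulation.)"""
    loads = [
        sum(b - a * k for (b, a), k in zip(jobs, ks))
        for jobs, ks in zip(machines, alloc)
    ]
    return max(loads)

# Two machines, a fixed budget of 6 resource units: shifting units
# between jobs moves the bottleneck and changes the makespan.
jobs = [[(10, 1), (8, 2)], [(12, 1)]]
assert makespan(jobs, [[2, 1], [3]]) == 14   # machine 1: 8+6, machine 2: 9
assert makespan(jobs, [[3, 2], [1]]) == 11   # machine 1: 7+4, machine 2: 11
```

The optimization problem the abstract describes is to choose both the allocation and the schedule so that this maximum machine load is minimized.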
Achieving linear speedup in parallel LRU cache simulation
 In Proceedings of the 12th GI/ITG Conference on Measuring, Modelling, and Evaluation of Computer and Communication Systems
, 2004
"... Previous work on simulation of LRU caching led to the development of parallel algorithms that are efficient for small numbers of processors. However, these algorithms exhibit a sublinear speedup, where the efficiency seriously decreases with a higher number of processors. In order to achieve linear ..."
Abstract

Cited by 1 (0 self)
Previous work on simulation of LRU caching led to the development of parallel algorithms that are efficient for small numbers of processors. However, these algorithms exhibit a sublinear speedup, where the efficiency seriously decreases with a higher number of processors. In order to achieve
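The sequential kernel being parallelized can be pictured as a plain LRU simulation over a reference trace. The sketch below is an assumed minimal version for orientation only; the cited algorithms compute the same hit counts in parallel rather than by this serial loop.

```python
from collections import OrderedDict

def lru_hits(trace, capacity):
    """Sequentially simulate an LRU cache of the given capacity over a
    reference trace; return the number of hits. (Illustrative kernel,
    not the parallel algorithm from the paper.)"""
    cache = OrderedDict()
    hits = 0
    for ref in trace:
        if ref in cache:
            hits += 1
            cache.move_to_end(ref)         # refresh recency on a hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least recently used
            cache[ref] = True
    return hits

trace = ["a", "b", "a", "c", "b", "a"]
assert lru_hits(trace, capacity=2) == 1
assert lru_hits(trace, capacity=3) == 3
```

Because each step depends on the cache state left by the previous reference, the loop is inherently serial, which is why achieving linear speedup across many processors is the hard part.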
Linear Speed-Up, Information Vicinity, and Finite-State Machines
 In IFIP Proceedings. North-Holland
, 1994
"... Connections are shown between two properties of a machine model: linear speedup and polynomial vicinity. In the context of the author's Block Move (BM) model, these relate to: "How long does it take to simulate a finite transducer S on a given input z?" This question is related to t ..."
Abstract

Cited by 1 (0 self)
Connections are shown between two properties of a machine model: linear speedup and polynomial vicinity. In the context of the author's Block Move (BM) model, these relate to: "How long does it take to simulate a finite transducer S on a given input z?" This question is related
C-FOREST: Parallel Shortest-Path Planning with Superlinear Speedup
"... C-FOREST is a parallelization framework for single-query sampling-based shortest-path planning algorithms. Multiple search trees are grown in parallel (e.g., 1 per CPU). Each time a better path is found, it is exchanged between trees so that all trees can benefit from its data. Specifically, ..."
Abstract

Cited by 2 (0 self)
that C-FOREST achieves significant superlinear speedup in practice for shortest-path planning problems (team and arm), but not for feasible-path planning (alpha).
Lock-free Gauss-Sieve for linear speedups in parallel high performance SVP calculation
 In: SBAC-PAD
, 2014
"... Lattice-based cryptography became a hot topic in the past years because it seems to be quantum-immune, i.e., resistant to attacks operated with quantum computers. The security of lattice-based cryptosystems is determined by the hardness of certain lattice problems, such as the Shortest Vector Pro ..."
Abstract

Cited by 6 (1 self)
Problem (SVP). Thus, it is of prime importance to study how efficiently SVP solvers can be implemented. This paper presents a parallel shared-memory implementation of the Gauss-Sieve algorithm, a well-known SVP solver. Our implementation achieves almost linear and linear speedups with up to 64 cores
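For reference, the speedup figure quoted across these results is the usual ratio S(p) = T(1)/T(p), where "linear" means S(p) ≈ p on p cores. The timings below are made-up numbers for illustration, not measurements from any of the papers.

```python
def speedup(t_serial, t_parallel):
    """S(p) = T(1) / T(p); linear speedup on p cores means S(p) ~ p."""
    return t_serial / t_parallel

# Hypothetical timings for a 64-core run (illustrative values only):
assert speedup(640.0, 10.0) == 64.0   # exactly linear on 64 cores
assert speedup(640.0, 16.0) == 40.0   # sublinear: S(64) < 64
assert speedup(640.0, 8.0) == 80.0    # superlinear: S(64) > 64
```

Superlinear values (as reported for C-FOREST above) are possible when parallel workers share information that prunes work the serial run would have done.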
A benchmark of NonStop SQL release 2 demonstrating near-linear speedup and scale-up on large databases
 In Proceedings of the 1990 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems
, 1989
"... its second release, NonStop SQL transparently and automatically implements parallelism within an SQL statement. This parallelism allows query execution speed to increase almost linearly as processors and discs are added to the system (speedup). In addition, this parallelism can help jobs restricted ..."
Abstract

Cited by 34 (5 self)
its second release, NonStop SQL transparently and automatically implements parallelism within an SQL statement. This parallelism allows query execution speed to increase almost linearly as processors and discs are added to the system (speedup). In addition, this parallelism can help jobs restricted
Linear Speed-Up for a Parallel Non-Approximate Recasting of Center-Based Clustering Algorithms, including K-Means,
, 2000
"... multidimensional data clustering, data mining, very large databases, parallel algorithms, scale-up. Data clustering is one of the fundamental techniques in scientific data analysis and data mining. It partitions a data set into groups of similar items, as measured by some distance metric. Over the yea ..."
Abstract
of multiple machines to bear on a given large problem in order to scale up the largest problem size one can handle. We describe a technique for parallelizing center-based data clustering algorithms. The central idea is to communicate only sufficient statistics, yielding linear speedup with excellent efficiency
Parallel database systems: the future of high performance database systems
 Communications of the ACM
, 1992
"... Abstract: Parallel database machine architectures have evolved from the use of exotic hardware to a software parallel dataflow architecture based on conventional shared-nothing hardware. These new designs provide impressive speedup and scale-up when processing relational database queries. This paper ..."
Abstract

Cited by 638 (13 self)
Abstract: Parallel database machine architectures have evolved from the use of exotic hardware to a software parallel dataflow architecture based on conventional shared-nothing hardware. These new designs provide impressive speedup and scale-up when processing relational database queries. This paper