Results 1–10 of 459
Randomized Algorithms
, 1995
"... Randomized algorithms, once viewed as a tool in computational number theory, have by now found widespread application. Growth has been fueled by the two major benefits of randomization: simplicity and speed. For many applications a randomized algorithm is the fastest algorithm available, or the simp ..."
Abstract

Cited by 2196 (36 self)
Randomized algorithms, once viewed as a tool in computational number theory, have by now found widespread application. Growth has been fueled by the two major benefits of randomization: simplicity and speed. For many applications a randomized algorithm is the fastest algorithm available, or the simplest, or both. A randomized algorithm is an algorithm that uses random numbers to influence the choices it makes in the course of its computation. Thus its behavior (typically quantified as running time or quality of output) varies from
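To illustrate the flavor of algorithm the abstract describes, here is a minimal Las Vegas example (a hypothetical sketch, not taken from the book): selection of the k-th smallest element with a random pivot, where the answer is always correct but the running time depends on the random choices.

```python
import random

def randomized_select(items, k):
    """Return the k-th smallest element (0-indexed) of items.

    A Las Vegas randomized algorithm: the output is always correct,
    but the number of comparisons varies with the random pivot
    choices (expected O(n)).
    """
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    if k < len(less):
        return randomized_select(less, k)
    if k < len(less) + len(equal):
        return pivot
    return randomized_select(greater, k - len(less) - len(equal))
```

This is exactly the "behavior varies from run to run" phenomenon the abstract mentions: the result is fixed, the running time is the random variable.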
Proportionate progress: A notion of fairness in resource allocation
 Algorithmica
, 1996
"... Given a set of n tasks and m resources, where each task x has a rational weight x:w = x:e=x:p; 0 < x:w < 1, a periodic schedule is one that allocates a resource to a task x for exactly x:e time units in each interval [x:p k; x:p (k + 1)) for all k 2 N. We de ne a notion of proportionate progre ..."
Abstract

Cited by 322 (25 self)
Given a set of n tasks and m resources, where each task x has a rational weight x.w = x.e/x.p, 0 < x.w < 1, a periodic schedule is one that allocates a resource to a task x for exactly x.e time units in each interval [x.p·k, x.p·(k + 1)) for all k ∈ N. We define a notion of proportionate progress, called Pfairness, and use it to design an efficient algorithm which solves the periodic scheduling problem. Keywords: Euclid's algorithm, fairness, network flow, periodic scheduling, resource allocation.
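The Pfairness condition can be checked directly from its definition: after every slot, each task's cumulative allocation must stay strictly within one quantum of its ideal "fluid" share. A sketch of this check (the definition only, not the authors' scheduling algorithm):

```python
def is_pfair(schedule, weights):
    """Check a schedule for Pfairness.

    schedule[t] is the set of tasks allocated a resource in slot t.
    weights[x] is task x's rational weight e/p, with 0 < weight < 1.
    The schedule is Pfair if, after every slot t, each task's total
    allocation lies strictly between w*(t+1) - 1 and w*(t+1) + 1,
    where w*(t+1) is the ideal fluid allocation.
    """
    allocated = {x: 0 for x in weights}
    for t, slot in enumerate(schedule):
        for x in slot:
            allocated[x] += 1
        for x, w in weights.items():
            fluid = w * (t + 1)
            if not (fluid - 1 < allocated[x] < fluid + 1):
                return False
    return True
```

For example, two tasks of weight 1/2 sharing one resource must alternate slots; giving either task two consecutive slots violates Pfairness.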
Linear programming in linear time when the dimension is fixed
, 1984
"... It is demonstrated hat he linear programming problem in d variables and n constraints can be solved in O(n) time when d is fixed. This bound follows from a multidimensional search technique which is applicable for quadratic programming as well. There is also developed an algorithm that is polynomia ..."
Abstract

Cited by 213 (10 self)
It is demonstrated that the linear programming problem in d variables and n constraints can be solved in O(n) time when d is fixed. This bound follows from a multidimensional search technique which is applicable for quadratic programming as well. There is also developed an algorithm that is polynomial in both n and d provided d is bounded by a certain slowly growing function of n.
Fast algorithms for sorting and searching strings. In:
 ACM (ed.) 8th Symposium on Discrete Algorithms (SODA),
, 1997
"... Abstract We present theoretical algorithms for sorting and searching multikey data, and derive from them practical C implementations for applications in which keys are character strings. The sorting algorithm blends Quicksort and radix sort; it is competitive with the best known C sort codes. The s ..."
Abstract

Cited by 165 (0 self)
We present theoretical algorithms for sorting and searching multikey data, and derive from them practical C implementations for applications in which keys are character strings. The sorting algorithm blends Quicksort and radix sort; it is competitive with the best known C sort codes. The searching algorithm blends tries and binary search trees; it is faster than hashing and other commonly used search methods. The basic ideas behind the algorithms date back at least to the 1960s, but their practical utility has been overlooked. We also present extensions to more complex string problems, such as partial-match searching.
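The sorting half of the blend can be sketched compactly: partition three ways on one character position, and advance to the next character only in the "equal" partition. A compact sketch of the idea (not the authors' tuned C code):

```python
def multikey_quicksort(strings, depth=0):
    """Three-way radix quicksort for strings: the blend of Quicksort
    and radix sort described in the abstract.

    Partitions on the character at position `depth`. Strings already
    exhausted at this depth all share their first `depth` characters,
    so they are identical and sort first.
    """
    if len(strings) <= 1:
        return strings
    done = [s for s in strings if len(s) <= depth]
    rest = [s for s in strings if len(s) > depth]
    if not rest:
        return done
    pivot = rest[len(rest) // 2][depth]
    less = [s for s in rest if s[depth] < pivot]
    equal = [s for s in rest if s[depth] == pivot]
    greater = [s for s in rest if s[depth] > pivot]
    return (done
            + multikey_quicksort(less, depth)
            + multikey_quicksort(equal, depth + 1)   # pivot char matched: move on
            + multikey_quicksort(greater, depth))
```

The payoff over plain Quicksort is that shared prefixes are compared only once, not once per pairwise comparison.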
Static-Priority Scheduling on Multiprocessors
 In Proc. 22nd IEEE Real-Time Systems Symposium
, 2001
"... The preemptive scheduling of systems of periodic tasks on a platform comprised of several identical multiprocessors is considered. A scheduling algorithm is proposed for staticpriority scheduling of such systems; this algorithm is a simple extension of the uniprocessor ratemonotonic scheduling algo ..."
Abstract

Cited by 138 (17 self)
The preemptive scheduling of systems of periodic tasks on a platform comprised of several identical multiprocessors is considered. A scheduling algorithm is proposed for static-priority scheduling of such systems; this algorithm is a simple extension of the uniprocessor rate-monotonic scheduling algorithm. It is proven that this algorithm successfully schedules any periodic task system with a worst-case utilization no more than a third the capacity of the multiprocessor platform; for the special case of harmonic periodic task systems, the algorithm is proven to successfully schedule any system with a worst-case utilization of no more than half the platform capacity.
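The utilization bound stated in the abstract yields a simple sufficient (not necessary) schedulability test. A sketch of that test only; the paper's precise conditions are more refined:

```python
def rm_multiprocessor_ok(tasks, m):
    """Sufficient schedulability check based on the bound quoted in
    the abstract: a periodic task system is accepted if its total
    utilization is at most one third of the capacity of an
    m-processor platform. Passing guarantees schedulability under
    the static-priority algorithm; failing is inconclusive.

    tasks is a list of (execution_time, period) pairs.
    """
    utilization = sum(e / p for e, p in tasks)
    return utilization <= m / 3
```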
Fast Scheduling of Periodic Tasks on Multiple Resources
 In Proceedings of the 9th International Parallel Processing Symposium
"... Given n periodic tasks, each characterized by an execution requirement and a period, and m identical copies of a resource, the periodic scheduling problem is concerned with generating a schedule for the n tasks on the m resources. We present an algorithm that schedules every feasible instance of t ..."
Abstract

Cited by 129 (15 self)
Given n periodic tasks, each characterized by an execution requirement and a period, and m identical copies of a resource, the periodic scheduling problem is concerned with generating a schedule for the n tasks on the m resources. We present an algorithm that schedules every feasible instance of the periodic scheduling problem, and runs in O(min{m lg n, n}) time per slot scheduled. Given a set Γ of n tasks, where each task x is characterized by two integer parameters x.e and x.p, and m identical copies of a resource, a periodic schedule is one that allocates a resource to each task x in Γ for exactly x.e time units in each interval [k·x.p, (k + 1)·x.p) for all k in N, subject to the following constraints. Constraint 1: A resource can only be allocated to a task for an entire "slot" of time, where for each i in N slot i is the unit interval from time i to time i + 1. Constraint 2: No task may be allocated more than one copy of the resource ...
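Under this task model, the standard density condition characterizes feasibility: each task must fit within its own period, and total density must not exceed capacity. A sketch of that feasibility test (not the paper's scheduling algorithm itself):

```python
from fractions import Fraction

def is_feasible(tasks, m):
    """Feasibility test for the periodic scheduling problem under the
    density condition: every task needs e <= p (it can never hold more
    than one resource at a time), and the total density sum(e/p) must
    not exceed the m available resource copies. Exact rational
    arithmetic avoids any floating-point edge cases at the boundary.

    tasks is a list of (e, p) integer pairs.
    """
    return (all(e <= p for e, p in tasks)
            and sum(Fraction(e, p) for e, p in tasks) <= m)
```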
Geometric Mesh Partitioning: Implementation and Experiments
"... We investigate a method of dividing an irregular mesh into equalsized pieces with few interconnecting edges. The method’s novel feature is that it exploits the geometric coordinates of the mesh vertices. It is based on theoretical work of Miller, Teng, Thurston, and Vavasis, who showed that certain ..."
Abstract

Cited by 112 (20 self)
We investigate a method of dividing an irregular mesh into equal-sized pieces with few interconnecting edges. The method’s novel feature is that it exploits the geometric coordinates of the mesh vertices. It is based on theoretical work of Miller, Teng, Thurston, and Vavasis, who showed that certain classes of “well-shaped” finite element meshes have good separators. The geometric method is quite simple to implement: we describe a Matlab code for it in some detail. The method is also quite efficient and effective: we compare it with some other methods, including spectral bisection.
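The simplest coordinate-exploiting partitioner, and a useful baseline for the circle-based separators the abstract describes, is a median cut along the axis of greatest spread. A hypothetical sketch, not the authors' Matlab code:

```python
def coordinate_bisect(points):
    """Split mesh vertices into two equal-sized halves by a median
    cut along the coordinate axis of greatest spread. This is a far
    simpler cousin of the Miller-Teng-Thurston-Vavasis separators;
    it shows only how vertex coordinates can drive partitioning.

    points is a list of coordinate tuples; returns two lists of
    point indices.
    """
    dims = len(points[0])
    spreads = [max(p[d] for p in points) - min(p[d] for p in points)
               for d in range(dims)]
    axis = spreads.index(max(spreads))
    # Sort indices by the chosen coordinate and cut at the median.
    order = sorted(range(len(points)), key=lambda i: points[i][axis])
    half = len(points) // 2
    return order[:half], order[half:]
```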
How to Summarize the Universe: Dynamic Maintenance of Quantiles
 In VLDB
, 2002
"... Order statistics, i.e., quantiles, are frequently used in databases both at the database server as well as the application level. For example, they are useful in selectivity estimation during query optimization, in partitioning large relations, in estimating query result sizes when building us ..."
Abstract

Cited by 112 (15 self)
Order statistics, i.e., quantiles, are frequently used in databases, both at the database server and at the application level. For example, they are useful in selectivity estimation during query optimization, in partitioning large relations, in estimating query result sizes when building user interfaces, and in characterizing the data distribution of evolving datasets in the process of data mining.
Random Sampling in Cut, Flow, and Network Design Problems
, 1999
"... We use random sampling as a tool for solving undirected graph problems. We show that the sparse graph, or skeleton, that arises when we randomly sample a graph’s edges will accurately approximate the value of all cuts in the original graph with high probability. This makes sampling effective for pro ..."
Abstract

Cited by 101 (12 self)
We use random sampling as a tool for solving undirected graph problems. We show that the sparse graph, or skeleton, that arises when we randomly sample a graph’s edges will accurately approximate the value of all cuts in the original graph with high probability. This makes sampling effective for problems involving cuts in graphs. We present fast randomized (Monte Carlo and Las Vegas) algorithms for approximating and exactly finding minimum cuts and maximum flows in unweighted, undirected graphs. Our cut-approximation algorithms extend unchanged to weighted graphs while our weighted-graph flow algorithms are somewhat slower. Our approach gives a general paradigm with potential applications to any packing problem. It has since been used in a near-linear time algorithm for finding minimum cuts, as well as faster cut and flow algorithms. Our sampling theorems also yield faster algorithms for several other cut-based problems, including approximating the best balanced cut of a graph, finding a k-connected orientation of a 2k-connected graph, and finding integral multicommodity flows in graphs with a great deal of excess capacity. Our methods also improve the efficiency of some parallel cut and flow algorithms. Our methods also apply to the network design problem, where we wish to build a network satisfying certain connectivity requirements between vertices. We can purchase edges of various costs and wish to satisfy the requirements at minimum total cost. Since our sampling theorems apply even when the sampling probabilities are different for different edges, we can apply randomized rounding to solve network design problems. This gives approximation algorithms that guarantee much better approximations than previous algorithms whenever the minimum connectivity requirement is large. As a particular example, we improve the best approximation bound for the minimum k-connected subgraph problem from 1.85 to 1 + O(√(log n)/k).
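The skeleton construction itself is a one-liner: keep each edge independently with probability p. This sketch performs only the sampling step; the abstract's cut-approximation guarantees then apply to the resulting subgraph with high probability.

```python
import random

def sample_skeleton(edges, p):
    """Build the sampled 'skeleton' of an unweighted, undirected
    graph: keep each edge independently with probability p. Per the
    sampling theorem described above, every cut in the skeleton then
    approximates p times its value in the original graph, w.h.p.

    edges is a list of (u, v) pairs.
    """
    return [e for e in edges if random.random() < p]
```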
Random sampling techniques for space efficient online computation of order statistics of large datasets
 IN ACM SIGMOD '99
, 1999
"... In a recent paper [MRL98], we had described a general framework for single pass approximate quantile nding algorithms. This framework included several known algorithms as special cases. We had identi ed a new algorithm, within the framework, which had a signi cantly smaller requirement for main memo ..."
Abstract

Cited by 101 (1 self)
In a recent paper [MRL98], we had described a general framework for single-pass approximate quantile finding algorithms. This framework included several known algorithms as special cases. We had identified a new algorithm, within the framework, which had a significantly smaller requirement for main memory than other known algorithms. In this paper, we address two issues left open in our earlier paper. First, all known and space-efficient algorithms for approximate quantile finding require advance knowledge of the length of the input sequence. Many important database applications employing quantiles cannot provide this information. In this paper, we present a novel non-uniform random sampling scheme and an extension of our framework. Together, they form the basis of a new algorithm which computes approximate quantiles without knowing the input sequence length. Second, if the desired quantile is an extreme value (e.g., within the top 1% of the elements), the space requirements of currently known algorithms are overly pessimistic. We provide a simple algorithm which estimates extreme values using less space than required by the earlier more general technique for computing all quantiles. Our principal observation here is that random sampling is quantifiably better when estimating extreme values than is the case with the median.
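The classic example of sampling without advance knowledge of the input length is reservoir sampling, shown below as a sketch of the general idea; the paper's non-uniform scheme is considerably more elaborate.

```python
import random

def reservoir_sample(stream, k):
    """Draw a uniform random sample of k items from a stream whose
    length is unknown in advance (classic reservoir sampling). After
    processing i+1 items, each item seen so far is in the reservoir
    with probability k/(i+1).
    """
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # Replace a random reservoir slot with probability k/(i+1).
            j = random.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir
```

A sample like this, kept online, is what makes single-pass quantile estimation possible when the stream length is unknown.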