Results 1–10 of 505
Least angle regression
, 2004
"... The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm ..."
Cited by 1326 (37 self)
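The "traditional forward selection" that LARS refines can be sketched in a few lines: at each step, add the covariate most correlated with the current residual, then refit. A minimal illustrative sketch (assuming numpy; the function name and the correlation rule are a textbook simplification, not LARS itself):

```python
import numpy as np

def forward_selection(X, y, k):
    """Greedy forward selection sketch: repeatedly add the predictor
    most correlated with the current residual, then refit least squares
    on the active set. Illustrative only; LARS is a less greedy variant."""
    n, p = X.shape
    active = []
    residual = y.copy()
    for _ in range(k):
        # score each unused column by |correlation| with the residual
        scores = [abs(X[:, j] @ residual) if j not in active else -np.inf
                  for j in range(p)]
        active.append(int(np.argmax(scores)))
        # refit on the active set and update the residual
        beta, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        residual = y - X[:, active] @ beta
    return active
```

LARS differs by moving coefficients only as far as needed until another covariate becomes equally correlated, rather than fully refitting at each step.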
A fast learning algorithm for deep belief nets
 Neural Computation
, 2006
"... We show how to use “complementary priors” to eliminate the explaining away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a ..."
Cited by 970 (49 self)
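Each greedy layer in this scheme is a restricted Boltzmann machine trained with contrastive divergence; stacking such layers, each trained on the previous layer's hidden activities, gives the layer-at-a-time procedure. A single-layer CD-1 sketch (assuming numpy; hyperparameters are illustrative, and this omits the stacking and fine-tuning stages):

```python
import numpy as np

def rbm_cd1(data, n_hidden=8, epochs=200, lr=0.1, seed=0):
    """Train one binary RBM layer with 1-step contrastive divergence (CD-1).
    Illustrative sketch of a single greedy pretraining layer only."""
    rng = np.random.default_rng(seed)
    n_vis = data.shape[1]
    W = 0.01 * rng.standard_normal((n_vis, n_hidden))
    a = np.zeros(n_vis)      # visible biases
    b = np.zeros(n_hidden)   # hidden biases
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        v0 = data
        ph0 = sig(v0 @ W + b)                             # P(h=1 | v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sampled hiddens
        v1 = sig(h0 @ W.T + a)                            # mean-field reconstruction
        ph1 = sig(v1 @ W + b)
        # CD-1 updates: positive phase minus (one-step) negative phase
        W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)
        a += lr * (v0 - v1).mean(axis=0)
        b += lr * (ph0 - ph1).mean(axis=0)
    h = (rng.random((len(data), n_hidden)) < sig(data @ W + b)).astype(float)
    recon = sig(h @ W.T + a)
    return W, float(((data - recon) ** 2).mean())
```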
Dynamics of Random Early Detection
 In Proceedings of ACM SIGCOMM
, 1997
"... In this paper we evaluate the effectiveness of Random Early Detection (RED) over traffic types categorized as non-adaptive, fragile and robust, according to their responses to congestion. We point out that RED allows unfair bandwidth sharing when a mixture of the three traffic types shares a link. This unfairness is caused by the fact that at any given time RED imposes the same loss rate on all flows, regardless of their bandwidths. We propose Fair Random Early Drop (FRED), a modified version of RED. FRED uses per-active-flow accounting to impose on each flow a loss rate that depends on the flow’s buffer ..."
Cited by 465 (1 self)
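The flow-blind behavior the abstract criticizes comes from RED's marking rule: the gateway tracks an average queue size and drops arrivals with a probability that ramps linearly between two thresholds, independent of which flow the packet belongs to. A minimal sketch of that linear ramp (parameter names are illustrative; classic RED additionally adjusts this probability by a packet counter):

```python
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """RED-style marking probability: 0 below min_th, linear ramp up to
    max_p at max_th, 1 above max_th. Note it never looks at the flow id,
    which is the unfairness FRED's per-active-flow accounting addresses."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```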
Policy gradient methods for reinforcement learning with function approximation.
 In NIPS,
, 1999
"... Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented ... that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal ..."
Cited by 439 (20 self)
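The simplest instance of estimating this gradient from experience is REINFORCE on a bandit: sample an action from a softmax policy and move the parameters along reward times the score function ∇ log π. A minimal sketch (assuming numpy; the two-armed bandit, learning rate, and absence of a baseline are illustrative choices, not the paper's setup):

```python
import numpy as np

def reinforce_bandit(rewards=(0.2, 0.8), steps=2000, lr=0.1, seed=0):
    """REINFORCE sketch on a 2-armed Bernoulli bandit: softmax policy over
    arms, updated along r * grad log pi(a). Returns final arm probabilities."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)
    for _ in range(steps):
        p = np.exp(theta - theta.max()); p /= p.sum()
        a = rng.choice(2, p=p)
        r = float(rng.random() < rewards[a])  # Bernoulli reward
        grad = -p.copy(); grad[a] += 1.0      # grad of log softmax at arm a
        theta += lr * r * grad
    p = np.exp(theta - theta.max()); p /= p.sum()
    return p
```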
Greedy strikes back: Improved facility location algorithms
 Journal of Algorithms
, 1999
"... A fundamental facility location problem is to choose the location of facilities, such as industrial plants and warehouses, to minimize the cost of satisfying the demand for some commodity. There are associated costs for locating the facilities, as well as transportation costs for distributing the commodity ... Recently, the first constant factor approximation algorithm for this problem was obtained by Shmoys, Tardos and Aardal [16]. We show that a simple greedy heuristic combined with the algorithm by Shmoys, Tardos and Aardal can be used to obtain an approximation guarantee of 2.408. We discuss a few variants ..."
Cited by 210 (11 self)
Greedy vector quantization
"... We investigate the greedy version of the L^p-optimal vector quantization problem for an R^d-valued random vector X ∈ L^p. We show the existence of a sequence (a_N)_{N≥1} such that a_N minimizes a ↦ ‖ min_{1≤i≤N−1} ‖X − a_i‖ ∧ ‖X − a‖ ‖_{L^p} (the L^p-mean quantization error at level N induced by (a_1, ..., a_{N−1}, a)). We s ..."
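The greedy scheme in the criterion above grows the codebook one point at a time, each new code chosen to minimize the resulting quantization error. An empirical sketch with a finite candidate set standing in for the minimization over R^d (assuming numpy; p = 1 and the sample-mean error are illustrative simplifications of the L^p criterion):

```python
import numpy as np

def greedy_codebook(samples, N, candidates):
    """Greedy vector quantization sketch: add one code point at a time,
    each chosen from `candidates` to minimize the empirical mean of
    min_i ||X - a_i|| over the sample cloud."""
    code = []
    dmin = np.full(len(samples), np.inf)  # distance to current codebook
    for _ in range(N):
        best_a, best_err, best_d = None, np.inf, None
        for a in candidates:
            d = np.minimum(dmin, np.linalg.norm(samples - a, axis=1))
            err = d.mean()  # empirical quantization error if a is added
            if err < best_err:
                best_a, best_err, best_d = a, err, d
        code.append(best_a)
        dmin = best_d
    return np.array(code)
```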
Greedy Randomized Adaptive Search Procedures For The Steiner Problem In Graphs
 In Quadratic Assignment and Related Problems, volume 16 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science
, 1999
"... We describe four versions of a Greedy Randomized Adaptive Search Procedure (GRASP) for finding approximate solutions of general instances of the Steiner Problem in Graphs. Different construction and local search algorithms are presented. Preliminary computational results with one of the versions ..."
Cited by 123 (31 self)
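A GRASP iteration has the two phases the abstract names: a randomized greedy construction that picks uniformly from a restricted candidate list of near-best moves, followed by local search. A sketch on set cover rather than the Steiner problem (illustrative only; the problem, `alpha` parameter, and redundancy-removal local search are stand-ins for the paper's constructions):

```python
import random

def grasp_set_cover(universe, sets, iters=20, alpha=0.5, seed=1):
    """GRASP sketch: randomized-greedy construction + local search,
    repeated for `iters` restarts, keeping the best cover found."""
    rng = random.Random(seed)
    universe = set(universe)
    best = None
    for _ in range(iters):
        uncovered, chosen = set(universe), []
        while uncovered:
            gains = [(len(s & uncovered), i) for i, s in enumerate(sets)]
            gmax = max(g for g, _ in gains)
            # restricted candidate list: sets within alpha of the greedy best
            rcl = [i for g, i in gains if g > 0 and g >= alpha * gmax]
            i = rng.choice(rcl)
            chosen.append(i)
            uncovered -= sets[i]
        # local search: drop sets made redundant by the random choices
        for i in list(chosen):
            rest = set().union(*[sets[j] for j in chosen if j != i])
            if universe <= rest:
                chosen.remove(i)
        if best is None or len(chosen) < len(best):
            best = chosen
    return best
```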
Greedy Facility Location Algorithms Analyzed Using Dual Fitting with Factor-Revealing LP
 Journal of the ACM
, 2001
"... We present a natural greedy algorithm for the metric uncapacitated facility location problem and use the method of dual fitting to analyze its approximation ratio, which turns out to be 1.861. The running time of our algorithm is O(m log m), where m is the total number of edges in the underlying c ..."
Cited by 146 (12 self)
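The natural greedy rule for this problem repeatedly picks the most cost-effective move: a facility together with a set of unserved clients minimizing (opening cost + connection costs) / number of clients served. A small sketch of that rule (illustrative; it implements the cost-effectiveness greedy, not the paper's exact algorithm or its O(m log m) implementation):

```python
def greedy_facility_location(f, d):
    """Cost-effectiveness greedy sketch for uncapacitated facility location.
    f[i]: opening cost of facility i; d[i][j]: cost connecting city j to i.
    Repeatedly open the (facility, client-set) pair with lowest average
    cost; once open, a facility's opening cost drops to 0 for later rounds."""
    n_f, n_c = len(f), len(d[0])
    unserved = set(range(n_c))
    assign, open_cost, total = {}, list(f), 0.0
    while unserved:
        best = None  # (avg_cost, facility, clients, cost)
        for i in range(n_f):
            clients = sorted(unserved, key=lambda j: d[i][j])
            cost = open_cost[i]
            for k in range(1, len(clients) + 1):
                cost += d[i][clients[k - 1]]  # add the k-th nearest client
                avg = cost / k
                if best is None or avg < best[0]:
                    best = (avg, i, clients[:k], cost)
        _, i, served, cost = best
        total += cost
        open_cost[i] = 0.0  # facility now open: reuse pays connection only
        for j in served:
            assign[j] = i
        unserved -= set(served)
    return assign, total
```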
Greedy and Randomized Versions of the Multiplicative Schwarz Method
, 2011
"... We consider sequential, i.e., Gauss-Seidel type, subspace correction methods for the iterative solution of symmetric positive definite variational problems, where the order of subspace correction steps is not deterministically fixed as in standard multiplicative Schwarz methods. Here, we greedily ch ... e.g., the condition number, of the underlying space splitting. To avoid the additional computational cost associated with the greedy pick, we alternatively consider choosing the next subspace randomly, and show similar estimates for the expected error reduction. We give some numerical examples, in particular ..."
Cited by 4 (0 self)
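The cheapest instance of the random-order idea takes each subspace to be a single coordinate: at every step pick a coordinate uniformly at random and solve the corresponding one-dimensional correction exactly, i.e., randomized Gauss-Seidel. A minimal sketch (assuming numpy; one-coordinate subspaces and a fixed iteration count are illustrative simplifications):

```python
import numpy as np

def randomized_gauss_seidel(A, b, iters=500, seed=0):
    """Randomized coordinate sweep for SPD A x = b: at each step pick a
    random coordinate i and solve exactly for x[i] (a one-dimensional
    subspace correction), leaving the others fixed."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(n)
        # exact correction in coordinate i: zero out residual component i
        x[i] += (b[i] - A[i] @ x) / A[i, i]
    return x
```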
Greedy Selfish Network Creation∗ (Full Version)
"... We introduce and analyze greedy equilibria (GE) for the well-known model of selfish network creation by Fabrikant et al. [PODC ’03]. GE are interesting for two reasons: (1) they model outcomes found by agents which prefer smooth adaptations over radical strategy changes, (2) GE are outcomes found by ... time agents are likely not to play optimally. But how good are networks created by such agents? We answer this question for very simple agents. Quite surprisingly, naive greedy play suffices to create remarkably stable networks. Specifically, we show that in the Sum version, where agents attempt to minimize ..."