Results 1–10 of 58
Interior-point Methods
, 2000
Abstract

Cited by 612 (15 self)
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semidefinite programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions.
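The barrier-function machinery this survey describes can be illustrated on a toy one-variable LP. The sketch below follows the log-barrier central path with damped Newton steps under simplifying assumptions (one variable, box constraints 0 ≤ x ≤ 1); the function names are invented for illustration and are not from the surveyed paper:

```python
def newton_1d(x, t, c, iters=50):
    # Minimize t*c*x - ln(x) - ln(1-x) over (0, 1) by damped Newton.
    for _ in range(iters):
        g = t * c - 1.0 / x + 1.0 / (1.0 - x)   # gradient of the barrier objective
        h = 1.0 / x**2 + 1.0 / (1.0 - x)**2     # Hessian, always positive
        step = g / h
        # Halve the step until the iterate stays strictly inside (0, 1).
        while not (0.0 < x - step < 1.0):
            step *= 0.5
        x -= step
    return x

def barrier_lp(c=1.0, t0=1.0, mu=10.0, rounds=8):
    # Path-following: solve the barrier problem for increasing t;
    # the minimizers trace the central path toward the LP optimum x* = 0.
    x, t = 0.5, t0
    for _ in range(rounds):
        x = newton_1d(x, t, c)
        t *= mu
    return x

print(barrier_lp())  # a point very close to the optimum x* = 0
```

For large t the barrier minimizer of this toy problem behaves like 1/t, which is the usual picture: the centering term fades and the iterate slides along the central path into the optimal vertex.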
Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time
, 2003
Abstract

Cited by 202 (12 self)
We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has smoothed complexity polynomial in the input size and the standard deviation of
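The definition in this abstract — the maximum over inputs of the expected performance under small random perturbations — can be made concrete with a toy cost model. The cost function below is entirely hypothetical (a capped 1/|x| "running time" with a spike at x = 0), chosen only to show how Gaussian perturbation tames a worst-case input:

```python
import random
import statistics

def cost(x, cap=1e6):
    # Toy running-time model: blows up near x = 0, capped at `cap`.
    # The worst-case input is x = 0, with cost equal to the cap.
    return min(cap, 1.0 / abs(x)) if x != 0 else cap

def smoothed_cost(x, sigma, trials=20000, seed=0):
    # Monte Carlo estimate of E_g[cost(x + g)] for g ~ N(0, sigma^2):
    # the inner expectation in the smoothed-analysis definition.
    rng = random.Random(seed)
    return statistics.fmean(
        cost(x + rng.gauss(0, sigma)) for _ in range(trials))

print(cost(0.0))                 # worst-case cost: the cap, 1e6
print(smoothed_cost(0.0, 0.1))   # far smaller once the input is perturbed
```

The smoothed measure would then take the maximum of `smoothed_cost` over all inputs x; here the perturbation alone collapses the worst-case spike by several orders of magnitude, which is the qualitative effect the paper establishes for the simplex algorithm.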
AVERAGE-CASE STABILITY OF GAUSSIAN ELIMINATION
, 1990
Abstract

Cited by 54 (2 self)
Gaussian elimination with partial pivoting is unstable in the worst case: the "growth factor" can be as large as 2^(n-1), where n is the matrix dimension, resulting in a loss of n bits of precision. It is proposed that an average-case analysis can help explain why it is nevertheless stable in practice. The results presented begin with the observation that for many distributions of matrices, the matrix elements after the first few steps of elimination are approximately normally distributed. From here, with the aid of estimates from extreme value statistics, reasonably accurate predictions of the average magnitudes of elements, pivots, multipliers, and growth factors are derived. For various distributions of matrices with dimensions n ≤ 1024, the average growth factor (normalized by the standard deviation of the initial matrix elements) is within a few percent of n^(2/3) for partial pivoting and approximately n^(1/2) for complete pivoting. The average maximum element of the residual with both kinds of pivoting appears to be of magnitude O(n), as compared with O(n^(1/2)) for QR factorization. The experiments and analysis presented show that small multipliers alone are not enough to explain the average-case stability of Gaussian elimination; it is also important that the correction introduced in the remaining matrix at each elimination step is of rank 1. Because of this low-rank property, the signs of the elements and multipliers in Gaussian elimination are not independent, but are interrelated in such a way as to retard growth. By contrast, alternative pivoting strategies involving high-rank corrections are sometimes unstable even though the multipliers are small.
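The 2^(n-1) worst case quoted above is attained by Wilkinson's classic example matrix. A small, self-contained sketch of Gaussian elimination with partial pivoting (not the paper's experimental code) reproduces the exponential growth:

```python
def growth_factor_gepp(A):
    # Eliminate with partial pivoting on a copy of A and return the
    # growth factor: the largest entry seen during elimination divided
    # by the largest entry of the original matrix.
    n = len(A)
    U = [row[:] for row in A]
    max_a = max(abs(x) for row in A for x in row)
    max_u = max_a
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(U[i][k]))  # partial pivot
        U[k], U[p] = U[p], U[k]
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]          # multiplier, |m| <= 1
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
            max_u = max(max_u, max(abs(x) for x in U[i]))
    return max_u / max_a

def wilkinson(n):
    # Wilkinson's worst-case matrix: 1 on the diagonal and in the last
    # column, -1 below the diagonal; its growth factor is 2^(n-1).
    return [[1.0 if i == j or j == n - 1 else (-1.0 if i > j else 0.0)
             for j in range(n)] for i in range(n)]

print(growth_factor_gepp(wilkinson(6)))   # 32.0 = 2^(6-1)
```

Each elimination step doubles the entries in the last column, so the final pivot is 2^(n-1) — the exact worst case the abstract contrasts with the benign n^(2/3) average behavior.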
An update on the Hirsch conjecture,
 Jahresber. Dtsch. Math.Ver.
, 2010
Abstract

Cited by 41 (3 self)
The Hirsch Conjecture (1957) stated that the graph of a d-dimensional polytope with n facets cannot have (combinatorial) diameter greater than n − d. That is, any two vertices of the polytope can be connected by a path of at most n − d edges. This paper presents the first counterexample to the conjecture. Our polytope has dimension 43 and 86 facets. It is obtained from a 5-dimensional polytope with 48 facets that violates a certain generalization of the d-step conjecture of Klee and Walkup.
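As a sanity check on the statement of the bound (not on the paper's counterexample), the d-cube meets the Hirsch bound with equality: it has n = 2d facets and graph diameter exactly d = n − d. A short breadth-first-search sketch, with all names invented for illustration:

```python
from itertools import product
from collections import deque

def graph_diameter(adj):
    # Diameter of an unweighted graph: BFS from every vertex,
    # take the largest eccentricity.
    def ecc(s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(ecc(s) for s in adj)

def cube_graph(d):
    # Vertex-edge graph of the d-cube: vertices are 0/1 vectors,
    # edges join vectors differing in exactly one coordinate.
    verts = list(product((0, 1), repeat=d))
    return {v: [tuple(b ^ (k == i) for k, b in enumerate(v))
                for i in range(d)]
            for v in verts}

d = 4                     # the d-cube has n = 2d facets
print(graph_diameter(cube_graph(d)), 2 * d - d)   # both equal d = 4
```

The counterexample's significance is that for its 43-dimensional polytope with 86 facets the diameter exceeds 86 − 43 = 43, something no cube-like example can exhibit.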
Linear Programming, the Simplex Algorithm and Simple Polytopes
 Math. Programming
, 1997
Abstract

Cited by 31 (1 self)
In the first part of the paper we survey some far-reaching applications of the basic facts of linear programming to the combinatorial theory of simple polytopes. In the second part we discuss some recent developments concerning the simplex algorithm. We describe subexponential randomized pivot rules and upper bounds on the diameter of graphs of polytopes. 1 Introduction: A convex polyhedron is the intersection P of a finite number of closed halfspaces in R^d. P is a d-dimensional polyhedron (briefly, a d-polyhedron) if the points in P affinely span R^d. A convex d-dimensional polytope (briefly, a d-polytope) is a bounded convex d-polyhedron. Alternatively, a convex d-polytope is the convex hull of a finite set of points which affinely span R^d. A (nontrivial) face F of a d-polyhedron P is the intersection of P with a supporting hyperplane. F itself is a polyhedron of some lower dimension. If the dimension of F is k we call F a k-face of P. The empty set and P itself are...
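The definitions above can be made concrete with the smallest interesting example, the unit square: a 2-polytope cut out by four halfspaces, whose vertices (0-faces) each lie on exactly d = 2 facets. The layout below is illustrative only:

```python
# Facets of the unit square [0,1]^2 as halfspaces a . x <= b.
halfspaces = [((-1, 0), 0.0),   # -x <= 0, i.e. x >= 0
              (( 1, 0), 1.0),   #  x <= 1
              (( 0,-1), 0.0),   # -y <= 0, i.e. y >= 0
              (( 0, 1), 1.0)]   #  y <= 1
vertices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def tight(v):
    # Indices of facets whose supporting hyperplane a . x = b contains v:
    # these are the facets the face sits on.
    return [i for i, (a, b) in enumerate(halfspaces)
            if abs(a[0] * v[0] + a[1] * v[1] - b) < 1e-12]

for v in vertices:
    print(v, tight(v))   # each vertex (0-face) lies on exactly d = 2 facets
```

An edge (1-face) of the square would instead be tight on exactly one facet, matching the rule that a k-face of a simple d-polytope lies on d − k facets.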
A randomized polynomial-time simplex algorithm for linear programming
 IN STOC
, 2006
Abstract

Cited by 28 (5 self)
We present the first randomized polynomial-time simplex algorithm for linear programming. Like the other known polynomial-time algorithms for linear programming, its running time depends polynomially on the number of bits used to represent its input. We begin by reducing the input linear program to a special form in which we merely need to certify boundedness. As boundedness does not depend upon the right-hand-side vector, we run the shadow-vertex simplex method with a random right-hand-side vector. Thus, we do not need to bound the diameter of the original polytope. Our analysis rests on a geometric statement of independent interest: given a polytope Ax ≤ b in isotropic position, if one makes a polynomially small perturbation to b then the number of edges of the projection of the perturbed polytope onto a random 2-dimensional subspace is expected to be polynomial.
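The geometric statement at the end of this abstract concerns the "shadow" of a polytope on a random 2-dimensional subspace. A toy version projects the vertices of a d-cube onto a random plane and counts the edges of the resulting polygon with a standard convex-hull routine; the names are assumed for illustration, and this is not the authors' analysis:

```python
import random
from itertools import product

def hull(points):
    # Andrew's monotone chain: convex hull vertices in boundary order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

def shadow_size(d, seed=1):
    # Project the 2^d vertices of the d-cube onto a random 2-D subspace
    # spanned by two Gaussian directions, and count the polygon's edges.
    rng = random.Random(seed)
    u = [rng.gauss(0, 1) for _ in range(d)]
    v = [rng.gauss(0, 1) for _ in range(d)]
    shadow = [(sum(ui * x for ui, x in zip(u, vert)),
               sum(vi * x for vi, x in zip(v, vert)))
              for vert in product((0.0, 1.0), repeat=d)]
    return len(hull(shadow))

print(shadow_size(6))   # far fewer edges than the 2^6 = 64 projected points
```

For the cube the shadow is a zonogon with at most 2d edges, so the count stays tiny even as the vertex count grows exponentially; the paper proves a polynomial bound of this flavor for perturbed polytopes in isotropic position.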
Criss-Cross Methods: A Fresh View on Pivot Algorithms
 Mathematical Programming
, 1997
"... this paper is to present mathematical ideas and ..."
Smoothed Analysis of Termination of Linear Programming Algorithms
Abstract

Cited by 23 (3 self)
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and Teng