Results 1–10 of 16
On the curvature of the central path of linear programming theory
Foundations of Computational Mathematics, 2003
Abstract

Cited by 19 (3 self)
Abstract. We prove a linear bound on the average total curvature of the central path of linear programming theory in terms of the number of variables. 1 Introduction. In this paper we study the curvature of the central path of linear programming theory. We establish that for a linear programming problem defined on a compact polytope contained in R^n, the total curvature of the central path is less than or ...
A variant of the Vavasis–Ye layered-step interior-point algorithm for linear programming
2003
Abstract

Cited by 13 (6 self)
In this paper we present a variant of Vavasis and Ye’s layered-step path-following primal-dual interior-point algorithm for linear programming. Our algorithm is a predictor–corrector-type algorithm which uses from time to time the layered least squares (LLS) direction in place of the affine scaling (AS) direction. It has the same iteration-complexity bound as Vavasis and Ye’s algorithm, namely O(n^3.5 log(χ̄_A + n)), where n is the number of nonnegative variables and χ̄_A is a certain condition number associated with the constraint matrix A. Vavasis and Ye’s algorithm requires explicit knowledge of χ̄_A (which is very hard to compute or even estimate) in order to compute the layers for the LLS direction. In contrast, our algorithm uses the AS direction at the current iterate to determine the layers for the LLS direction, and hence does not require knowledge of χ̄_A. A variant with similar properties and with the same complexity has been developed by Megiddo, Mizuno, and Tsuchiya [Math. Programming, 82 (1998), pp. 339–355]. However, their algorithm needs to compute n LLS directions on every iteration, while ours computes at most one LLS direction on any given iteration.
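As context for the AS/LLS distinction in this abstract: the affine-scaling direction is the Newton direction for the primal-dual optimality conditions with the centering term set to zero. The following is a minimal sketch on a toy standard-form LP; the data, starting point, and step size are illustrative assumptions, and this is only the plain AS step, not the paper's layered machinery.

```python
# Sketch: one damped primal-dual affine-scaling (AS) step on a toy LP.
# The data, starting point, and step size are illustrative assumptions;
# this is the plain AS step, not the layered least squares (LLS) variant.
# LP (standard form): minimize c^T x  s.t.  x1 + x2 = 1, x >= 0, c = (1, 2).
# AS direction = Newton direction for  Ax = b, A^T y + s = c, XSe = 0.

c = (1.0, 2.0)
x = [0.5, 0.5]             # strictly feasible primal point (x1 + x2 = 1)
y = 0.0                    # dual multiplier of the single constraint
s = [c[0] - y, c[1] - y]   # dual slacks s = c - A^T y, with A = [1, 1]

def as_direction(x, y, s):
    # With A = [1, 1] the Newton system forces dx2 = -dx1 and ds1 = ds2 = -dy,
    # leaving a 2x2 system from  s_i*dx_i + x_i*ds_i = -x_i*s_i:
    #    s1*dx1 - x1*dy = -x1*s1
    #   -s2*dx1 - x2*dy = -x2*s2
    a11, a12, r1 = s[0], -x[0], -x[0] * s[0]
    a21, a22, r2 = -s[1], -x[1], -x[1] * s[1]
    det = a11 * a22 - a12 * a21
    dx1 = (r1 * a22 - a12 * r2) / det
    dy = (a11 * r2 - r1 * a21) / det
    return [dx1, -dx1], dy, [-dy, -dy]

dx, dy, ds = as_direction(x, y, s)
alpha = 0.5  # damped step keeps x and s strictly positive
x = [xi + alpha * di for xi, di in zip(x, dx)]
y += alpha * dy
s = [si + alpha * di for si, di in zip(s, ds)]
gap = sum(xi * si for xi, si in zip(x, s))  # duality gap x^T s: 1.5 -> 0.75
```

Since A dx = 0, A^T dy + ds = 0, and dx^T ds = 0, one damped step preserves both feasibility conditions exactly and shrinks the duality gap by the factor (1 − α).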
A new iteration-complexity bound for the MTY predictor-corrector algorithm
 SIAM Journal on Optimization
Abstract

Cited by 10 (4 self)
Abstract. In this paper we present a new iteration-complexity bound for the Mizuno–Todd–Ye predictor-corrector (MTY PC) primal-dual interior-point algorithm for linear programming. The analysis of the paper is based on the important notion of crossover events introduced by Vavasis and Ye. For a standard-form linear program min{c^T x : Ax = b, x ≥ 0} with decision variable x ∈ R^n, we show that the MTY PC algorithm, started from a well-centered interior-feasible solution with duality gap nμ_0, finds an interior-feasible solution with duality gap less than nη in O(T(μ_0/η) + n^3.5 log(χ̄*_A)) iterations, where T(t) ≡ min{n^2 log(log t), log t} for all t > 0 and χ̄*_A is a scaling-invariant condition number associated with the matrix A. More specifically, χ̄*_A is the infimum of all the condition numbers χ̄_{AD}, where D varies over the set of positive diagonal matrices. Under the setting of the Turing machine model, our analysis yields an O(n^3.5 L_A + min{n^2 log L, L}) iteration-complexity bound for the MTY PC algorithm to find a primal-dual optimal solution, where L_A and L are the input sizes of the matrix A and the data (A, b, c), respectively. This contrasts well with the classical iteration-complexity bound for the MTY PC algorithm, which depends linearly on L instead of log L.
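To see when each branch of T(t) ≡ min{n² log(log t), log t} in the bound above is active, here is a quick numeric check; the values of n and t are illustrative assumptions, with natural logarithms throughout.

```python
import math

# Numeric check of the two branches of T(t) = min{n^2*log(log t), log t}
# from the bound above; n and t values are illustrative assumptions
# (natural logarithms throughout).

def T(t, n):
    return min(n * n * math.log(math.log(t)), math.log(t))

n = 10
moderate = T(math.e ** 5, n)  # log t = 5 is the smaller branch here
huge = T(1e300, n)            # n^2*log(log t) takes over for huge t
```

For moderate gap ratios the log t branch is active, so the bound behaves like the classical one; only for astronomically large μ_0/η does the n² log log t branch take over, which is the source of the doubly-logarithmic dependence.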
Interior Point Algorithms For Network Flow Problems
in Advances in linear and integer programming, 1996
Abstract

Cited by 10 (2 self)
Computational algorithms for the solution of network flow problems are of great practical significance. In the last decade, a new class of computationally efficient algorithms, based on the interior point method, has been proposed and applied to solve large-scale network flow problems. In this chapter, we review interior point approaches for network flows, with emphasis on computational issues. Key words. Network flow problems, interior point methods, computational testing, computer implementation. AMS(MOS) subject classifications. 90B10, 90C05, 90C06, 90C35, 6505, 65F10, 65F50. 1. Introduction. A large number of problems in transportation, communications, and manufacturing can be modeled as network flow problems. In these problems one seeks to find the most efficient, or optimal, way to move flow (e.g. materials, information, buses, electrical currents) on a network (e.g. postal network, computer network, transportation grid, power grid). Among these optimization problems, many ...
Developments in Linear and Integer Programming
Abstract

Cited by 6 (0 self)
In this review we describe recent developments in linear and integer (linear) programming. For over 50 years Operational Research practitioners have made use of linear optimization models to aid decision making, and over this period the size of problems that can be solved has increased dramatically, the time required to solve them has decreased substantially, and the flexibility of modelling and solving systems has increased steadily. Large models are no longer confined to large computers, and the flexibility of optimization systems embedded in other decision support tools has made online decision making using linear programming a reality (and using integer programming a possibility). The review focuses on recent developments in algorithms, software and applications, and investigates some connections between linear optimization and other technologies.
Identifying an Optimal Basis in Linear Programming
Annals of Operations Research, 1993
Abstract

Cited by 3 (0 self)
In this report we propose some sufficient conditions under which an optimal basis may be identified from a central path point in linear programming. These conditions depend only on the coefficient matrix A and are valid for real-number data. 1 The central path. The problem we consider is linear programming written in dual format: minimize b^T y subject to A^T y − s = c, s ≥ 0. (1) Here, A is an m × n matrix assumed to have rank m, b ∈ R^m and c ∈ R^n are given vectors, and y ∈ R^m is the unknown vector. The vector s denotes the slack variables. Let us assume that the set defined by A^T y ≥ c is bounded and has an interior feasible point. Then we can define the central path point (s(μ), y(μ)), given a parameter μ > 0, to be the unique minimizer of: minimize b^T y − μ Σ_{i=1}^n ln s_i subject to A^T y − s = c, s > 0. (2) ...
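The barrier characterization of the central path described in this abstract can be made concrete on a toy instance (the data below are an illustrative assumption, not from the paper): with m = 1, n = 2, A = [1, −1], b = 1, and c = (0, −1), the slacks are s_1 = y and s_2 = 1 − y, the feasible set is y ∈ (0, 1), and the barrier problem reduces to minimizing y − μ(ln y + ln(1 − y)). Setting the derivative to zero gives the quadratic y² − (1 + 2μ)y + μ = 0, whose root in (0, 1) is the central path point:

```python
import math

def central_path_y(mu):
    # Root in (0, 1) of  y^2 - (1 + 2*mu)*y + mu = 0,  i.e. the minimizer
    # of the barrier function y - mu*(ln y + ln(1 - y)) on the toy instance.
    b_ = 1.0 + 2.0 * mu
    return (b_ - math.sqrt(b_ * b_ - 4.0 * mu)) / 2.0

# As mu decreases, y(mu) traces the central path toward the optimum y = 0
# of the limiting problem: minimize y over [0, 1].
path = [central_path_y(mu) for mu in (1.0, 0.1, 0.01, 0.001)]
```

Each path point satisfies the first-order condition 1 − μ/y + μ/(1 − y) = 0, and the sequence decreases monotonically toward the optimal vertex y = 0.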
An Information Geometric Approach to Polynomial-time Interior-point Algorithms — Complexity Bound via Curvature Integral —
2009
Abstract

Cited by 2 (1 self)
In this paper, we study polynomial-time interior-point algorithms in view of information geometry. Information geometry is a differential geometric framework which has been successfully applied to statistics, learning theory, signal processing, etc. We consider the information geometric structure for conic linear programs introduced by self-concordant barrier functions, and develop a precise iteration-complexity estimate of the polynomial-time interior-point algorithm based on an integral of (embedding) curvature of the central trajectory in a rigorous differential-geometric sense. We further study implications of the theory applied to classical linear programming, and establish a surprising link to the strong “primal-dual curvature” integral bound established by Monteiro and Tsuchiya, which is based on the work of Vavasis and Ye on the layered-step interior-point algorithm. By using these results, we can show that the total embedding curvature of the central trajectory, i.e., the aforementioned integral over the whole central trajectory, is bounded by O(n^3.5 log(χ̄*_A + n)), where χ̄*_A is a condition number of the coefficient matrix A and n is the number of nonnegative variables. In particular, the integral is bounded by O(n^4.5 m) for combinatorial linear programs including network flow problems, where m is the number of constraints. We also provide a complete differential-geometric characterization of the primal-dual curvature in the primal-dual algorithm. Finally, in view of this integral bound, we observe that the primal (or dual) interior-point algorithm requires fewer iterations than the primal-dual interior-point algorithm, at least in the case of linear programming.
Progress in Linear Programming: Interior-Point Algorithms
1994
Abstract
Abstract: According to current estimates, more than $100 million in human and computer time is invested yearly in the formulation and solution of linear programming problems. Businesses, large and small, use linear programming models to optimize communication systems, to schedule transportation networks, to control inventories, to plan investments, and to maximize production... In this article we describe some recent developments in linear programming. We highlight progress in interior-point algorithms during the last ten years.
An Analysis of Weighted Least Squares Method and Layered Least Squares Method with the Basis Block Lower Triangular Matrix Form
2008
Abstract
In this paper, we analyze the limiting behavior of the weighted least squares problem min_{x ∈ R^n} Σ_{i=1}^p ‖D_i(A_i x − b_i)‖², where each D_i is a positive definite diagonal matrix. We consider the situation where the magnitudes of the weights are drastically different block-wise, so that max(D_1) ≥ min(D_1) ≫ max(D_2) ≥ min(D_2) ≫ max(D_3) ≥ ... ≫ max(D_{p−1}) ≥ min(D_{p−1}) ≫ max(D_p). Here max(·) and min(·) represent the maximum and minimum diagonal entries, respectively. Specifically, we consider the case when the gap g ≡ min_i 1/(‖D_i^{−1}‖ ‖D_{i+1}‖) is very large or tends to infinity. Vavasis and Ye proved that the limiting solution exists (when the proportions of the diagonal elements within each block D_i are unchanged and only the gap g tends to ∞), and showed that the limit is characterized as the solution of a variant of the least squares problem called the layered least squares (LLS) problem. We analyze the difference between the solutions of WLS and LLS quantitatively, and show that the norm of the difference of the two solutions is bounded above by O(χ_A χ̄_A^{2(p+1)} g^{−2} ‖b‖).
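The WLS-to-LLS limit described in this abstract can be seen on a toy one-dimensional instance with p = 2 blocks; all data below are illustrative assumptions, not from the paper.

```python
# Toy 1-D instance (p = 2 blocks) of the WLS -> LLS limit discussed above.
# All data are illustrative assumptions, not from the paper.
# WLS: minimize  d1^2*(a1*x - b1)^2 + d2^2*(a2*x - b2)^2  over scalar x.
# LLS limit (d1 >> d2): satisfy the heavy block exactly, so x -> b1/a1.

def wls(a1, b1, a2, b2, d1, d2):
    # Closed-form minimizer from the normal equation of the weighted sum.
    num = d1 * d1 * a1 * b1 + d2 * d2 * a2 * b2
    den = d1 * d1 * a1 * a1 + d2 * d2 * a2 * a2
    return num / den

a1, b1, a2, b2 = 1.0, 2.0, 1.0, 5.0
x_lls = b1 / a1  # layered least squares limit: 2.0

# Distance to the LLS limit as the inter-block gap g grows; on this instance
# the error is exactly 3/(g^2 + 1), i.e. it decays like g^{-2}, matching the
# g^{-2} factor in the bound quoted above.
errors = [abs(wls(a1, b1, a2, b2, g, 1.0) - x_lls) for g in (10.0, 100.0, 1000.0)]
```

Multiplying the gap by 10 shrinks the distance to the LLS solution by roughly a factor of 100, which is the g^{-2} behavior the paper quantifies in general.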