Results 11–20 of 22
Detecting Infeasibility in Infeasible-Interior-Point Methods for Optimization
 Foundations of Computational Mathematics, Minneapolis 2002, London Mathematical Society Lecture Note Series 312
, 2003
Abstract

Cited by 9 (1 self)
We study interior-point methods for optimization problems in the case of infeasibility or unboundedness. While many such methods are designed to search for optimal solutions even when they do not exist, we show that they can be viewed as implicitly searching for well-defined optimal solutions to related problems whose optimal solutions give certificates of infeasibility for the original problem or its dual. Our main development is in the context of linear programming, but we also discuss extensions to more general convex programming problems.
Simplified Analysis of an O(nL)-Iteration Infeasible Predictor-Corrector Path-Following Method for Monotone LCP
, 1994
Abstract

Cited by 5 (1 self)
We give a simplified analysis of an infeasible predictor-corrector path-following method for solving the monotone linear complementarity problem. This method, like those studied by Mizuno et al. and by Potra and Sheng, (i) requires two factorizations and two backsolves per iteration, (ii) can find a solution in O(√n L) or O(nL) iterations, depending on the quality of the starting point, and (iii) has local quadratic convergence, provided a strictly complementary solution exists. The method decreases the centering parameter and the infeasibility at both the predictor and corrector steps, and it is flexible in that primal, dual, or primal-dual scaling can be used for the corrector step without affecting the global and local convergence properties of the method.
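As a rough illustration of the kind of method this abstract describes, here is a minimal NumPy sketch of a predictor-corrector path-following iteration for the monotone LCP. It is not the paper's exact scheme (the centering heuristic here is Mehrotra-style rather than the analyzed rule), and the function name, starting point, and parameters are invented for the example; note the two linear solves per iteration, echoing the "two factorizations and two backsolves" in the abstract.

```python
import numpy as np

def solve_lcp_pc(M, q, tol=1e-8, max_iter=50):
    """Predictor-corrector sketch for the monotone LCP:
    find x, s >= 0 with s = M x + q and x^T s = 0."""
    n = len(q)
    x = np.ones(n)
    s = np.maximum(M @ x + q, 1.0)    # strictly positive (possibly infeasible) start
    e = np.ones(n)
    for _ in range(max_iter):
        r = M @ x + q - s             # infeasibility residual
        mu = x @ s / n                # complementarity measure
        if mu < tol and np.linalg.norm(r) < tol:
            break
        # same coefficient matrix serves both solves of the iteration
        K = np.block([[M, -np.eye(n)], [np.diag(s), np.diag(x)]])

        def solve(sigma):
            rhs = np.concatenate([-r, -x * s + sigma * mu * e])
            d = np.linalg.solve(K, rhs)
            return d[:n], d[n:]

        def max_step(dx, ds):
            a = 1.0
            for v, dv in ((x, dx), (s, ds)):
                neg = dv < 0
                if neg.any():
                    a = min(a, float(np.min(-v[neg] / dv[neg])))
            return a

        # predictor: pure affine-scaling (sigma = 0) trial step
        dxa, dsa = solve(0.0)
        aa = min(1.0, 0.99 * max_step(dxa, dsa))
        mu_aff = (x + aa * dxa) @ (s + aa * dsa) / n
        sigma = (mu_aff / mu) ** 3    # Mehrotra-style adaptive centering
        # corrector: the recentered step actually taken
        dx, ds = solve(sigma)
        a = min(1.0, 0.99 * max_step(dx, ds))
        x += a * dx
        s += a * ds
    return x, s
```

For a positive definite M the iterates drive both the residual r and the complementarity gap x^T s toward zero simultaneously, which is the sense in which infeasibility decreases at both steps.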
Extending Mehrotra's Corrector for Linear Programs
 and Optimization
, 1999
Abstract

Cited by 5 (0 self)
In this article a primal-dual interior-point method for solving linear programs is proposed. A new approach for generating higher-order search directions and a new method for an efficient higher-order subspace search along several search directions are the basis of the proposed extension. The subspace search is reduced to a linear program in several variables. The method using the simplest (two-dimensional) subprograms is a slight variation of Mehrotra's predictor-corrector method, and is thus known to be very efficient in practice. The higher-dimensional subproblems are guaranteed to be at least as effective, with respect to a certain measure, as the two-dimensional ones. Numerical experiments with the PCx package indicate that the higher-order subspace search is also very cheap and efficient in practice.
An Infeasible-Interior-Point Potential-Reduction Algorithm for Linear Programming
 Math. Progr.
, 1999
Abstract

Cited by 4 (0 self)
This paper studies a new potential function and an infeasible-interior-point method based on this function for the solution of linear programming problems. This work is motivated by the apparent gap between the algorithms with the best worst-case complexity and their most successful implementations. For example, analyses of the algorithms are usually carried out by imposing several regularity assumptions on the problem, but implementations can solve problems which do not satisfy these assumptions. Furthermore, most state-of-the-art implementations incorporate heuristic tricks, but these modifications are rarely addressed in the theoretical analysis of the algorithms. The method described here and its analysis have the flexibility to integrate any heuristic technique for implementation while maintaining the important polynomial complexity feature. Key words: linear programming, potential functions, infeasible-interior-point methods, homogeneity, self-dual. AMS Subject classification: 9...
A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization
, 2005
Abstract

Cited by 3 (2 self)
In [4] the second author presented a new primal-dual infeasible interior-point algorithm that uses full-Newton steps and whose iteration bound coincides with the best known bound for infeasible interior-point algorithms. Each iteration consists of a step that restores feasibility for an intermediate problem (the so-called feasibility step) and a few (usual) centering steps. No more than O(n log(n/ε)) iterations are required to obtain an ε-solution of the problem at hand. In this paper we use a different feasibility step and show that, with a simpler analysis, the same result can be obtained.
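To make the setting concrete, here is a minimal NumPy sketch of a generic infeasible primal-dual interior-point iteration for linear optimization, started from the standard cold point x = s = e, y = 0. It is only an illustration of the infeasible-start idea these papers study, not the full-Newton-step method of [4]: the function name, the fixed centering parameter, and the step rule are assumptions made for the example.

```python
import numpy as np

def solve_lp_ipm(A, b, c, tol=1e-8, max_iter=200):
    """Infeasible primal-dual interior-point sketch for
    min c^T x  s.t.  A x = b, x >= 0."""
    m, n = A.shape
    x, y, s = np.ones(n), np.zeros(m), np.ones(n)   # generally infeasible start
    for _ in range(max_iter):
        rp = A @ x - b                # primal infeasibility
        rd = A.T @ y + s - c          # dual infeasibility
        mu = x @ s / n                # duality measure
        if mu < tol and np.linalg.norm(rp) < tol and np.linalg.norm(rd) < tol:
            break
        sigma = 0.1                   # fixed centering parameter (illustration only)
        # Newton step on the perturbed KKT system
        K = np.block([
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        rhs = -np.concatenate([rd, rp, x * s - sigma * mu * np.ones(n)])
        d = np.linalg.solve(K, rhs)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        # fraction-to-the-boundary rule keeps (x, s) strictly positive
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * float(np.min(-v[neg] / dv[neg])))
        x += alpha * dx
        y += alpha * dy
        s += alpha * ds
    return x, y, s
```

Because the start is infeasible, each damped Newton step shrinks the residuals rp and rd by the same factor (1 − α) while reducing the duality measure μ, which is the mechanism the feasibility-step analyses quantify.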
On Superlinear Convergence of Infeasible-Interior-Point Algorithms for Linearly Constrained Convex Programs
 Computational Optimization and Applications
, 1996
Abstract

Cited by 2 (1 self)
This note derives bounds on the length of the primal-dual affine-scaling directions associated with a linearly constrained convex program satisfying the following conditions: 1) the problem has a solution satisfying strict complementarity; 2) the Hessian of the objective function satisfies a certain invariance property. We illustrate the usefulness of these bounds by establishing the superlinear convergence of the algorithm presented in Wright and Ralph [22] for solving the optimality conditions associated with a linearly constrained convex program satisfying the above conditions.
1 Introduction
During the past few years, we have seen the appearance of many papers dealing with primal-dual (feasible and infeasible) interior-point algorithms for linear programs (LP), convex quadratic programs (QP), monotone linear complementarity problems (LCP), and monotone nonlinear complementarity problems (NCP) that are superlinearly or quadratically convergent. For LP and QP, these works include [1,...
New Complexity Analysis of IIPMs for Linear Optimization Based on a Specific Self-Regular Function
, 2005
Abstract

Cited by 1 (1 self)
Primal-dual Interior-Point Methods (IPMs) have shown their ability to solve large classes of optimization problems efficiently. Feasible IPMs require a strictly feasible starting point to generate iterates that converge to an optimal solution. The self-dual embedding model provides an elegant solution to this problem, at the cost of slightly increasing the size of the problem. On the other hand, Infeasible Interior-Point Methods (IIPMs) can be initiated from any positive vector, and thus are popular in IPM software. In this paper we propose an adaptive large-update IIPM based on a specific self-regular proximity function, with barrier degree 1 + log n, that operates in the infinity neighborhood of the central path. An O(n^{3/2} (log n) log(n/ε)) worst-case iteration bound is established for our new algorithm. This improves the best iteration bound known so far for IIPMs, O(n² log(n/ε)).
Polynomial Convergence of Predictor-Corrector for SDLCP Based on the MZ Family of Directions
Abstract
We establish the polynomial convergence of a new class of path-following methods for semidefinite linear complementarity problems whose search directions belong to the class of directions introduced by Monteiro [6]. Namely, we show that the polynomial iteration-complexity bound of the well-known predictor-corrector algorithm of Mizuno and Ye for linear programming carries over to the context of SDLCP.
unknown title
Abstract
A full-Newton step infeasible interior-point algorithm for linear programming based on a kernel function