Error Bound and Convergence Analysis of Matrix Splitting Algorithms for the Affine Variational Inequality Problem
, 1992
Abstract

Cited by 52 (7 self)
Consider the affine variational inequality problem. It is shown that the distance to the solution set from a feasible point near the solution set can be bounded by the norm of a natural residual at that point. This bound is then used to prove linear convergence of a matrix splitting algorithm for solving the symmetric case of the problem. This latter result improves upon a recent result of Luo and Tseng that further assumes the problem to be monotone.
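The natural residual mentioned above is, in the standard formulation, r(x) = x − P_X(x − F(x)), where P_X is projection onto the feasible set. A minimal Python sketch for the LCP special case (X the nonnegative orthant, F(x) = Mx + q, so the projection is a componentwise positive part); M and q here are illustrative choices of our own, not from the paper:

```python
def natural_residual(M, q, x):
    """Natural residual r(x) = x - P_X(x - (Mx + q)) for the affine
    variational inequality with X = the nonnegative orthant (the LCP
    special case); P_X is then the componentwise positive part."""
    n = len(x)
    Fx = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
    return [x[i] - max(x[i] - Fx[i], 0.0) for i in range(n)]

# Illustrative data: M = I, q = (-1, 1). Then x = (1, 0) solves the LCP,
# since y = Mx + q = (0, 1) with x >= 0, y >= 0, x^T y = 0.
M = [[1.0, 0.0], [0.0, 1.0]]
q = [-1.0, 1.0]
print(natural_residual(M, q, [1.0, 0.0]))  # zero residual at the solution
print(natural_residual(M, q, [2.0, 0.0]))  # nonzero residual off the solution
```

The error bound in the abstract says that, near the solution set, the norm of this residual bounds the distance to the solution set up to a constant factor.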
On the linear convergence of descent methods for convex essentially smooth minimization
 SIAM J. Control Optim
, 1992
Abstract

Cited by 43 (9 self)
Dedicated to those courageous people who, on June 4, 1989, sacrificed their lives in
Local Convergence of Interior-Point Algorithms for Degenerate Monotone LCP
 Computational Optimization and Applications
, 1993
Abstract

Cited by 38 (5 self)
Most asymptotic convergence analysis of interior-point algorithms for monotone linear complementarity problems assumes that the problem is nondegenerate, that is, the solution set contains a strictly complementary solution. We investigate the behavior of these algorithms when this assumption is removed.

1 Introduction

In the monotone linear complementarity problem (LCP), we seek a vector pair (x, y) ∈ R^n × R^n that satisfies the conditions

    y = Mx + q,  x ≥ 0,  y ≥ 0,  x^T y = 0,  (1)

where q ∈ R^n, and M ∈ R^{n×n} is positive semidefinite. We use S to denote the solution set of (1). An assumption that is frequently made in order to prove superlinear convergence of interior-point algorithms for (1) is the nondegeneracy assumption:

Assumption 1 There is an (x*, y*) ∈ S such that x*_i + y*_i > 0 for all i = 1, ..., n.

In general, we can define three subsets B, N, and J of the index set {1, ..., n} by B = {i ...
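Conditions (1) and the strict-complementarity condition of Assumption 1 can be sketched directly; the one-dimensional LCP below is a hypothetical example chosen to exhibit the degenerate case the paper studies:

```python
def is_lcp_solution(M, q, x, tol=1e-9):
    """Check the LCP conditions (1): y = Mx + q, x >= 0, y >= 0, x^T y = 0."""
    n = len(x)
    y = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
    feasible = all(xi >= -tol for xi in x) and all(yi >= -tol for yi in y)
    complementary = abs(sum(xi * yi for xi, yi in zip(x, y))) <= tol
    return feasible and complementary

def is_strictly_complementary(M, q, x, tol=1e-9):
    """Assumption 1 at a given solution: x_i + y_i > 0 for every index i."""
    n = len(x)
    y = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
    return all(x[i] + y[i] > tol for i in range(n))

# Illustrative degenerate LCP: M = [1], q = 0. The unique solution is
# x = 0, y = 0, so x_1 + y_1 = 0 and no strictly complementary solution
# exists -- exactly the case Assumption 1 rules out.
M = [[1.0]]
q = [0.0]
print(is_lcp_solution(M, q, [0.0]))            # True
print(is_strictly_complementary(M, q, [0.0]))  # False
```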
Error Bounds for Convex Inequality Systems
 Generalized Convexity
, 1996
Abstract

Cited by 32 (0 self)
Using convex analysis, this paper gives a systematic and unified treatment of the existence of a global error bound for a convex inequality system. We establish a necessary and sufficient condition for a closed convex set defined by a closed proper convex function to possess a global error bound in terms of a natural residual. We derive many special cases of the main characterization, including the case where a Slater assumption is in place. Our results show clearly the essential conditions needed for convex inequality systems to satisfy global error bounds; they unify and extend a large number of existing results on global error bounds for such systems. [* The research of this author was based on work supported by the Natural Sciences and Engineering Research Council of Canada. † The research of this author was based on work supported by the National Science Foundation under grant CCR-9213739 and the Office of Naval Research under grant N000149310228.]

1 Introduction

Let f : ...
Superlinear convergence of an algorithm for monotone linear complementarity problems, when no strictly complementary solution exists
 Math. Oper. Res
, 1999
Error Bounds and Strong Upper Semicontinuity for Monotone Affine Variational Inequalities
 Annals of Operations Research
, 1993
Abstract

Cited by 9 (1 self)
Global error bounds for possibly degenerate or nondegenerate monotone affine variational inequality problems are given. The error bounds are on an arbitrary point and are in terms of the distance between the given point and a solution to a convex quadratic program. For the monotone linear complementarity problem the convex program is that of minimizing a quadratic function on the nonnegative orthant. These bounds may form the basis of an iterative quadratic programming procedure for solving affine variational inequality problems. A strong upper semicontinuity result is also obtained which may be useful for finitely terminating any convergent algorithm by periodically solving a linear program.

Key words. Error bounds, upper semicontinuity, variational inequalities, linear complementarity problem

Abbreviated title. Error bounds for variational inequalities

1 Introduction

We consider here the monotone affine variational inequality problem [1, 3] of finding an x̄ in X such that (x ...
On the Identification of Zero Variables in an Interior-Point Framework
, 1998
Abstract

Cited by 8 (1 self)
We consider column sufficient linear complementarity problems and study the problem of identifying those variables that are zero at a solution. To this end we propose a new, computationally inexpensive technique that is based on growth functions. We analyze in detail the theoretical properties of the identification technique and test it numerically. The identification technique is particularly suited to interior-point methods but can be applied to a wider class of methods.
Error Bounds For Quadratic Systems
 High Performance Optimization
, 1998
Abstract

Cited by 7 (2 self)
In this paper we consider the problem of estimating the distance from a given point to the solution set of a quadratic inequality system. We show, among other things, that a local error bound of order 1/2 holds for a system defined by linear inequalities and a single (nonconvex) quadratic equality. We also give a sharpening of Lojasiewicz' error bound for piecewise quadratic functions. In contrast, the early results for this problem further require either a convexity or a nonnegativity assumption.

1 Introduction

Consider a set S defined by an inequality system in R^n:

    S := {x ∈ R^n | g_1(x) ≤ 0, g_2(x) ≤ 0, ..., g_m(x) ≤ 0}  (1.1)

where each g_i : R^n → R is a continuous function. We shall denote the vector function (g_1, g_2, ..., g_m) by g. A set S of the form above is sometimes called a zero set. A residual function r(x) for S is a nonnegative valued vector function with the property that r(x) = 0 if and only if x ∈ S. A popular choice of residual function is given by r(x)...
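The excerpt is cut off before its own definition of r(x); one popular choice, sketched here under that assumption, is the norm of the positive parts of the g_i. The two-constraint system below is a made-up example, not from the paper:

```python
def residual(g, x):
    """Residual r(x) = || max(g(x), 0) ||_2 for S = {x : g_i(x) <= 0}:
    it is nonnegative and vanishes exactly when every inequality holds."""
    violations = [max(gi(x), 0.0) for gi in g]
    return sum(v * v for v in violations) ** 0.5

# Hypothetical system in R^1: g_1(x) = x - 1 <= 0 and g_2(x) = -x <= 0,
# i.e. S = [0, 1].
g = [lambda x: x - 1.0, lambda x: -x]
print(residual(g, 0.5))  # 0.0: the point lies in S
print(residual(g, 3.0))  # 2.0: only g_1 is violated, by 2
```

A local error bound of order 1/2, as in the abstract, would assert dist(x, S) ≤ c · r(x)^(1/2) for x near S.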
An Interior Point Algorithm For Linearly Constrained Optimization
 Siam J. Optim
, 1992
Abstract

Cited by 3 (0 self)
We describe an algorithm for optimization of a smooth function subject to general linear constraints. An algorithm of the gradient projection class is used, with the important feature that the "projection" at each iteration is performed using a primal-dual interior point method for convex quadratic programming. Convergence properties can be maintained even if the projection is done inexactly in a well-defined way. Higher-order derivative information on the manifold defined by the apparently active constraints can be used to increase the rate of local convergence.

Key words. potential reduction algorithm, gradient projection algorithm, linearly constrained optimization

AMS(MOS) subject classifications. 65K10, 90C30

1. Introduction. We address the problem

    min_x f(x)  s.t.  A^T x ≥ b,  (1)

where x ∈ R^n and b ∈ R^m, and f is assumed throughout to be twice continuously differentiable on the level set

    L = {x | A^T x ≥ b, f(x) ≤ f(x^0)},

where x^0 is some given initial choice...
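The gradient projection iteration the abstract describes can be sketched as below. For illustration only, the projection here is onto a simple box, where it has a closed form, standing in for the paper's interior-point QP projection onto a general linear-constraint set; the quadratic objective and step size are made-up choices:

```python
def project_box(x, lo, hi):
    """Closed-form projection onto a box; the paper instead projects onto
    general linear constraints via an interior-point QP solve."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def gradient_projection(grad, x, lo, hi, step=0.1, iters=200):
    """Bare-bones fixed-step gradient projection: x <- P(x - step * grad(x))."""
    for _ in range(iters):
        gx = grad(x)
        x = project_box([xi - step * gi for xi, gi in zip(x, gx)], lo, hi)
    return x

# Hypothetical objective f(x) = (x1 - 2)^2 + (x2 + 1)^2 on the box [0, 1]^2;
# the constrained minimizer is (1, 0).
grad = lambda x: [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)]
x = gradient_projection(grad, [0.5, 0.5], [0.0, 0.0], [1.0, 1.0])
print(x)  # converges to [1.0, 0.0]
```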
Error Bounds for Inconsistent Linear Inequalities and Programs
 Operations Research Letters
, 1994
Abstract

Cited by 2 (1 self)
For any system of linear inequalities, consistent or not, the norm of the violations of the inequalities by a given point, multiplied by a condition constant that is independent of the point, bounds the distance between the point and the nonempty set of points that minimize these violations. Similarly, for a dual pair of possibly infeasible linear programs, the norm of violations of primal-dual feasibility and primal-dual objective equality, when multiplied by a condition constant, bounds the distance between a given point and the nonempty set of minimizers of these violations. These results extend error bounds for consistent linear inequalities and linear programs to inconsistent systems.

Keywords: error bounds; linear inequalities; linear programs

Error bounds are playing an increasingly important role in mathematical programming. Beginning with Hoffman's classical error bound for linear inequalities [3], many papers have examined error bounds for linear and convex inequalities, line...
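The "norm of the violations" for a system Ax ≤ b can be sketched as follows; the one-variable system is a deliberately inconsistent toy example (x ≤ 0 together with x ≥ 1), so the violation norm is positive at every point:

```python
def violation_norm(A, b, x):
    """Euclidean norm of the violations of Ax <= b at x; this is the
    quantity that, scaled by a condition constant, bounds the distance
    to the set of points minimizing the violations."""
    n = len(x)
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(A))]
    return sum(max(ri, 0.0) ** 2 for ri in r) ** 0.5

# Hypothetical inconsistent system in R^1: x <= 0 and -x <= -1 (i.e. x >= 1).
A = [[1.0], [-1.0]]
b = [0.0, -1.0]
print(violation_norm(A, b, [0.5]))  # both constraints violated by 0.5
```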