
## Local Convergence of an Inexact-Restoration Method and Numerical Experiments (2005)

### Citations

5411 | Convex Analysis
- Rockafellar
- 1970
Citation Context: ...point, as in several sequential quadratic programming algorithms (see, for example, [13]) but in the inexactly restored point. The merit function used in [1] is a sharp Lagrangian as defined in [14], Example 11.58. Merit functions are useful tools in all branches of optimization. However, it has been observed that in many practical situations the performance of optimization algorithms that do no...

252 | Nonlinear programming without a penalty function.
- Fletcher, Leyffer
- 2002
Citation Context: ...last decade. In nonlinear programming, the more consistent strategy for globalizing algorithms without the use of merit functions seems to be the filter technique introduced by Fletcher and Leyffer ([16]). Gonzaga, Karas and Vanti ([17]) applied the filter strategy to an algorithm that resembles Inexact Restoration. Previous attempts of eliminating merit functions as globalization tools for semifeasi...

213 | The gradient projection method for nonlinear programming, part 1: linear constraints.
- Rosen
- 1960
Citation Context: ...: ℝⁿ → ℝᵐ are smooth functions and Ω ⊂ ℝⁿ is a (generally simple) closed and convex set. Inexact restoration (IR) methods (see [1, 2, 3]) are modern versions of the classical feasible methods ([4, 5, 6, 7, 8, 9, 10, 11, 12]) for nonlinear programming. The main iteration of an IR algorithm consists of two phases: in the restoration phase, infeasibility is reduced, and in the optimality phase a Lagrangian function is appro...

188 | CUTE: Constrained and unconstrained testing environments,
- Bongartz, Conn, et al.
- 1995
Citation Context: ...lgorithms for Nonlinear Programming from the point of view of robustness? We selected all the nonlinearly constrained problems with quadratic or nonlinear objective function from the CUTE collection ([29]). The initial points were the ones provided with the problem definition and the initial estimate for the multiplier vector was λ0 = 0. The algorithmic parameters selected were: θ = η = 0.99, K1 = K2 ...

92 | A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds,”
- Conn, Gould, et al.
- 1991
Citation Context: ...vector was λ0 = 0. The algorithmic parameters selected were: θ = η = 0.99, K1 = K2 = 10⁶, K̃3 = 0.1, and ε = 10⁻⁴. The globally convergent method selected for numerical comparison was LANCELOT ([30]) with second derivatives, a maximum of 10000 iterations and the remaining default options. In both methods we declared Convergence when the current iterate (x, λ) satisfies ‖h(x)‖∞ ≤ ε and ‖P(x − ∇L(x,λ))...
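The stopping test quoted in this context can be sketched numerically. The snippet below is a minimal illustration, assuming Ω is a simple box so that the projection P is a componentwise clip; the function and argument names are hypothetical, not from the paper's code:

```python
import numpy as np

def kkt_residuals(x, lam, h, grad_f, jac_h, lower, upper):
    """Feasibility and optimality residuals as in the quoted stopping test:
    convergence is declared when ||h(x)||_inf <= eps and
    ||P(x - grad L(x, lam)) - x||_inf <= eps, with P the projection onto
    the box [lower, upper] (one simple choice of Omega)."""
    grad_L = grad_f(x) + jac_h(x).T @ lam        # gradient of the Lagrangian
    proj = np.clip(x - grad_L, lower, upper)     # P(x - grad L(x, lam))
    feas = np.linalg.norm(h(x), np.inf)
    opt = np.linalg.norm(proj - x, np.inf)
    return feas, opt
```

At a KKT pair of, say, min x1² + x2² s.t. x1 + x2 = 1 on a large box, both residuals vanish, so the test ‖h‖∞ ≤ ε, ‖P(x − ∇L) − x‖∞ ≤ ε is satisfied for any ε > 0.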

63 | CONOPT: A GRG code for large sparse dynamic nonlinear optimization problems.
- Drud
- 1985
Citation Context: ...: ℝⁿ → ℝᵐ are smooth functions and Ω ⊂ ℝⁿ is a (generally simple) closed and convex set. Inexact restoration (IR) methods (see [1, 2, 3]) are modern versions of the classical feasible methods ([4, 5, 6, 7, 8, 9, 10, 11, 12]) for nonlinear programming. The main iteration of an IR algorithm consists of two phases: in the restoration phase, infeasibility is reduced, and in the optimality phase a Lagrangian function is appro...

62 | Large-scale active-set box-constrained optimization method with spectral projected gradients
- Birgin, Martínez
- 2002
Citation Context: ...ends to be inactive. In this case, we usually have that ‖P[xk+1 − ∇L(xk+1, λk) − ∇h(yk)(λk+1 − λk)] − xk+1‖ ≤ max{ε, η‖G(yk, λk)‖}. (44) In our implementation we used GENCAN, an algorithm introduced in [23] for solving (38), and ALGENCAN, a straightforward Augmented Lagrangian algorithm based on GENCAN for solving (40). Very likely, these are not the best choices from the point of view of efficiency, but...

39 | Sequential gradient-restoration algorithm for optimal control problems.
- Miele, Pritchard, et al.
- 1970
Citation Context: ...: ℝⁿ → ℝᵐ are smooth functions and Ω ⊂ ℝⁿ is a (generally simple) closed and convex set. Inexact restoration (IR) methods (see [1, 2, 3]) are modern versions of the classical feasible methods ([4, 5, 6, 7, 8, 9, 10, 11, 12]) for nonlinear programming. The main iteration of an IR algorithm consists of two phases: in the restoration phase, infeasibility is reduced, and in the optimality phase a Lagrangian function is appro...

34 | A globally convergent filter method for nonlinear programming.
- Gonzaga, Karas, et al.
- 2003
Citation Context: ...mming, the more consistent strategy for globalizing algorithms without the use of merit functions seems to be the filter technique introduced by Fletcher and Leyffer ([16]). Gonzaga, Karas and Vanti ([17]) applied the filter strategy to an algorithm that resembles Inexact Restoration. Previous attempts of eliminating merit functions as globalization tools for semifeasible methods go back to [18]. It i...

33 | Augmented Lagrangian with adaptive precision control for quadratic programming with simple bounds and equality constraints.
- Dostál, Friedlander, et al.
- 2003
Citation Context: ...they serve the purpose of answering the main questions addressed by the numerical experiments, which are related to robustness. Nevertheless, we would like to mention that in recent works ([24, 25, 26]) excellent numerical behavior of Augmented Lagrangian algorithms applied to linearly constrained minimization has been reported. The way in which the requirement (39) was implemented was as follows: ...

31 | Inexact restoration method with Lagrangian tangent decrease and new merit function for nonlinear programming.
- Martínez
- 2001
Citation Context: ...problem: min f(x) s.t. h(x) = 0, x ∈ Ω, (1) where f : ℝⁿ → ℝ, h : ℝⁿ → ℝᵐ are smooth functions and Ω ⊂ ℝⁿ is a (generally simple) closed and convex set. Inexact restoration (IR) methods (see [1, 2, 3]) are modern versions of the classical feasible methods ([4, 5, 6, 7, 8, 9, 10, 11, 12]) for nonlinear programming. The main iteration of an IR algorithm consists of two phases: in the restoration pha...
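The two-phase iteration described in this context can be illustrated schematically. The sketch below is not the paper's algorithm: it omits the merit-function and acceptance conditions entirely, and the Gauss-Newton restoration, least-squares multiplier update, fixed step length `t`, and box Ω are all illustrative assumptions:

```python
import numpy as np

def ir_iteration(x, h, grad_f, jac_h, lower, upper, t=0.1):
    """One schematic inexact-restoration iteration for
    min f(x) s.t. h(x) = 0, x in the box [lower, upper]."""
    # Restoration phase: reduce infeasibility with a Gauss-Newton
    # least-squares step on h, keeping the restored point inside the box.
    step = np.linalg.lstsq(jac_h(x), h(x), rcond=None)[0]
    y = np.clip(x - step, lower, upper)
    # Least-squares multiplier estimate at the restored point
    # (one common choice, not necessarily the paper's update).
    lam, *_ = np.linalg.lstsq(jac_h(y).T, -grad_f(y), rcond=None)
    # Optimality phase: decrease the Lagrangian L(., lam) from y
    # with a projected-gradient step onto the box.
    grad_L = grad_f(y) + jac_h(y).T @ lam
    x_new = np.clip(y - t * grad_L, lower, upper)
    return x_new, lam
```

On the toy problem min x1² + x2² s.t. x1 + x2 = 1, iterating this map from an infeasible start drives the iterates toward the solution (0.5, 0.5) with multiplier −1.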

31 | Inexact restoration algorithms for constrained optimization.
- Martínez, Pilotta
- 2000

24 | Nonlinear programming algorithms using trust regions and augmented Lagrangians with nonmonotone penalty parameters.
- Gomes, Maciel, et al.
- 1999
Citation Context: ...ics, Institute of Mathematics, Statistics and Scientific Computing, University of Campinas, Campinas, SP, Brazil. ...point, as in several sequential quadratic programming algorithms (see, for example, [13]) but in the inexactly restored point. The merit function used in [1] is a sharp Lagrangian as defined in [14], Example 11.58. Merit functions are useful tools in all branches of optimization. However...

24 | Numerical solution of nonlinear equations
- Moré, Cosnard
- 1979
Citation Context: ...itioners that, in the process of solving nonlinear systems, locally convergent methods can be improved by the simple device of maintaining the distance between consecutive iterates under control. See [22]. This is the role of the constraint ‖z − yk‖∞ ≤ K̃3 max{1, ‖yk‖∞} (43) in (40). In a neighborhood of a solution, the step-control constraint tends to be inactive. In this case, we usually have that ...
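The step-control device in constraint (43) can be sketched as a safeguard that pulls a trial point back toward yk whenever it strays too far; the function below is illustrative, not the paper's implementation:

```python
import numpy as np

def step_control(z, y, K3=0.1):
    """Enforce ||z - y||_inf <= K3 * max(1, ||y||_inf), as in the quoted
    constraint (43), by scaling the step back onto the bound when it is
    violated. K3 plays the role of the parameter written K-tilde-3."""
    bound = K3 * max(1.0, np.linalg.norm(y, np.inf))
    step = z - y
    norm = np.linalg.norm(step, np.inf)
    if norm > bound:
        return y + step * (bound / norm)  # scaled back onto the bound
    return z                              # already within the bound
```

In the paper the bound enters as a constraint of subproblem (40) rather than as an a-posteriori scaling; the scaling here just makes the geometric effect concrete.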

22 | Some efficient algorithms for solving systems of nonlinear equations
- Brent
- 1973
Citation Context: ...ocal and global methods in nonlinear optimization is, sometimes, surprising. As early as 1979, Moré and Cosnard ([22]) published a numerical study where Brent's method for solving nonlinear systems ([27, 28]) appeared to be better than globally convergent nonlinear solvers when a suitable control of the steplength was used. The analogy between the local Inexact Restoration method and the generalized Bro...

17 | Generalization of the Wolfe Reduced-Gradient Method to the Case of Nonlinear Constraints
- Abadie, Carpentier
- 1968

15 | Inexact restoration methods for nonlinear programming: advances and perspectives.
- Martínez, Pilotta
- 2005
Citation Context: ...problem: min f(x) s.t. h(x) = 0, x ∈ Ω, (1) where f : ℝⁿ → ℝ, h : ℝⁿ → ℝᵐ are smooth functions and Ω ⊂ ℝⁿ is a (generally simple) closed and convex set. Inexact restoration (IR) methods (see [1, 2, 3]) are modern versions of the classical feasible methods ([4, 5, 6, 7, 8, 9, 10, 11, 12]) for nonlinear programming. The main iteration of an IR algorithm consists of two phases: in the restoration pha...

15 | Reduced Gradient Methods, in Nonlinear Optimization
- Lasdon
- 1981

15 | Modifications and Extensions of the Conjugate-Gradient Restoration Algorithm for Mathematical Programming Problems
- Miele, Levy, et al.
- 1971

13 | Solution of contact problems by FETI domain decomposition with natural coarse space projections
- Dostál, Gomes, et al.
Citation Context: ...they serve the purpose of answering the main questions addressed by the numerical experiments, which are related to robustness. Nevertheless, we would like to mention that in recent works ([24, 25, 26]) excellent numerical behavior of Augmented Lagrangian algorithms applied to linearly constrained minimization has been reported. The way in which the requirement (39) was implemented was as follows: ...

11 | A Gradient Projection Algorithm for Nonlinear Constraints, Numerical Methods for Nonlinear Optimization, Edited by F.A.
- Rosen, Kreuser
- 1972

8 | Nonlinear Programming Algorithms with Dynamic Definition of Near-Feasibility: Theory and Implementations
- Bielschowsky
- 1996
Citation Context: ...anti ([17]) applied the filter strategy to an algorithm that resembles Inexact Restoration. Previous attempts at eliminating merit functions as globalization tools for semifeasible methods go back to [18]. It is not difficult to modify poor algorithms in order to obtain theoretically globally convergent methods. This can be done using either monotone or nonmonotone strategies. In general, the modificati...

8 | Generalization of the methods of Brent and Brown for solving nonlinear simultaneous equations
- Martínez
- 1979
Citation Context: ...case the critical pair (x̄, λ̄) is a solution of the nonlinear system h(x) = 0 and ∇f(x) + ∇h(x)λ = 0. If the Jacobian of this nonlinear system is nonsingular at (x̄, λ̄), Brent's generalized method ([20, 21]) defines an admissible iteration for constants that only depend on (x̄, λ̄). The basic properties of this method guarantee that the iteration is well defined in a neighborhood of the critical pair a...

7 | Solving nonlinear simultaneous equations with a generalization of Brent's method
- Martínez
- 1980
Citation Context: ...case the critical pair (x̄, λ̄) is a solution of the nonlinear system h(x) = 0 and ∇f(x) + ∇h(x)λ = 0. If the Jacobian of this nonlinear system is nonsingular at (x̄, λ̄), Brent's generalized method ([20, 21]) defines an admissible iteration for constants that only depend on (x̄, λ̄). The basic properties of this method guarantee that the iteration is well defined in a neighborhood of the critical pair a...

5 | A Class of Nonmonotone Stability
- Grippo, Lampariello, et al.
- 1991
Citation Context: ...l algorithm climbs over merit-function valleys in a very efficient way. In unconstrained optimization, nonmonotone strategies, where decrease of the merit function is not required at every iteration ([15]), became a popular tool in the last decade. In nonlinear programming, the more consistent strategy for globalizing algorithms without the use of merit functions seems to be the filter technique intro...
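The nonmonotone-strategy idea attributed to [15] can be sketched as an acceptance test against the maximum of a sliding window of recent function values, rather than the latest one; the window discipline and parameter names below are illustrative, not the reference's exact rule:

```python
from collections import deque

def make_nonmonotone_test(M=10, gamma=1e-4):
    """Build an acceptance test in the spirit of nonmonotone strategies:
    a trial value f_new is accepted if it does not exceed the maximum of
    the last M accepted values (minus an optional sufficient-decrease
    margin gamma * decrease), so the merit function may increase locally."""
    history = deque(maxlen=M)  # window of the last M accepted f-values

    def accept(f_new, decrease=0.0):
        if not history:                                  # first point: accept
            history.append(f_new)
            return True
        if f_new <= max(history) - gamma * decrease:     # nonmonotone test
            history.append(f_new)
            return True
        return False                                     # reject the trial

    return accept
```

With gamma = 0 and M = 3, a value of 9.9 is accepted after 10.0 and 9.0 because it is below the window maximum 10.0, even though it increases the latest value, which is exactly what a monotone test would forbid.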

5 | ALGORITHM 554: BRENTM, A FORTRAN subroutine for the numerical solution of systems of nonlinear equations [C5].
- Moré, Cosnard
- 1980
Citation Context: ...ocal and global methods in nonlinear optimization is, sometimes, surprising. As early as 1979, Moré and Cosnard ([22]) published a numerical study where Brent's method for solving nonlinear systems ([27, 28]) appeared to be better than globally convergent nonlinear solvers when a suitable control of the steplength was used. The analogy between the local Inexact Restoration method and the generalized Bro...