Results 1–10 of 479
On the limited memory BFGS method for large scale optimization
MATHEMATICAL PROGRAMMING, 1989
A Limited Memory Algorithm for Bound Constrained Optimization
SIAM JOURNAL ON SCIENTIFIC COMPUTING, 1994
An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based ...
Cited by 572 (9 self)
PETSc users manual
ANL-95/11, Revision 2.1.0, Argonne National Laboratory, 2001
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication. PETSc includes an expanding suite of parallel linear and nonlinear equation solvers and time integrators that may be used in application codes written in Fortran, C, and C++. PETSc provides many of the mechanisms needed within parallel application codes, such as parallel matrix and vector assembly routines. The library is organized hierarchically, enabling users to employ the level of abstraction that is most appropriate for a particular problem. By using techniques of object-oriented programming, PETSc provides enormous flexibility for users. PETSc is a sophisticated set of software tools; as such, for some users it initially has a much steeper ...
Cited by 282 (20 self)
The tradeoffs of large scale learning
IN: ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 20, 2008
This contribution develops a theoretical framework that takes into account the effect of approximate optimization on learning algorithms. The analysis shows distinct tradeoffs for the case of small-scale and large-scale learning problems. Small-scale learning problems are subject to the usual approximation–estimation tradeoff. Large-scale learning problems are subject to a qualitatively different tradeoff involving the computational complexity of the underlying optimization algorithms in nontrivial ways.
Cited by 270 (4 self)
On the Convergence of Pattern Search Algorithms
We introduce an abstract definition of pattern search methods for solving nonlinear unconstrained optimization problems. Our definition unifies an important collection of optimization methods that neither compute nor explicitly approximate derivatives. We exploit our characterization of pattern search methods to establish a global convergence theory that does not enforce a notion of sufficient decrease. Our analysis is possible because the iterates of a pattern search method lie on a scaled, translated integer lattice. This allows us to relax the classical requirements on the acceptance of the step, at the expense of stronger conditions on the form of the step, and still guarantee global convergence. Key words: unconstrained optimization, convergence analysis, direct search methods, globalization strategies, alternating variable search, axial relaxation, local variation, coordinate search, evolutionary operation, pattern search, multidirectional search, downhill simplex search. AMS(M ...
Cited by 243 (13 self)
Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods
SIAM REVIEW VOL. 45, NO. 3, PP. 385–482, 2003
Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
Cited by 237 (15 self)
Sequential Quadratic Programming
1995
... this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ...
Cited by 166 (4 self)
Choosing the Forcing Terms in an Inexact Newton Method
SIAM J. SCI. COMPUT., 1994
An inexact Newton method is a generalization of Newton's method for solving F(x) = 0, F: ℝⁿ → ℝⁿ, in which, at the kth iteration, the step s_k from the current approximate solution x_k is required to satisfy the condition ‖F(x_k) + F′(x_k) s_k‖ ≤ η_k ‖F(x_k)‖ for a "forcing term" η_k ∈ [0, 1). In typical applications, the choice of the forcing terms is critical to the efficiency of the method and can affect robustness as well. Promising choices of the forcing terms are given, their local convergence properties are analyzed, and their practical performance is shown on a representative set of test problems.
Cited by 161 (6 self)
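The forcing-term condition can be illustrated with a deliberately simplified one-dimensional sketch (the paper's adaptive choices of η_k are not reproduced here; a fixed η is assumed): taking s_k = -(1 - η) F(x_k)/F′(x_k) satisfies |F(x_k) + F′(x_k) s_k| = η |F(x_k)| exactly, so η directly controls the linear convergence rate of the outer iteration.

```python
def inexact_newton(F, dF, x0, eta=0.1, tol=1e-10, max_iter=100):
    """Simplified 1-D inexact Newton iteration with a fixed forcing term eta.
    At each step, s = -(1 - eta) F(x)/F'(x), which satisfies the inexact
    Newton condition |F(x) + F'(x) s| = eta * |F(x)|, eta in [0, 1)."""
    x = x0
    for k in range(max_iter):
        fx = F(x)
        if abs(fx) <= tol:
            return x, k
        # An exact solve of F'(x) s = -F(x) would give eta = 0 (Newton's
        # method); scaling by (1 - eta) mimics truncating an inner solver.
        s = -(1.0 - eta) * fx / dF(x)
        x += s
    return x, max_iter

# Find sqrt(2) as the root of x^2 - 2; a looser eta costs more iterations.
root, iters = inexact_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=2.0)
```

A smaller η gives faster (eventually quadratic, for η_k → 0) local convergence at the price of a more accurate, hence more expensive, inner linear solve per step, which is exactly the trade-off the paper's adaptive forcing terms balance.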
Representations Of Quasi-Newton Matrices And Their Use In Limited Memory Methods
1996
We derive compact representations of BFGS and symmetric rank-one matrices for optimization. These representations allow us to efficiently implement limited memory methods for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto subspaces. We also present a compact representation of the matrices generated by Broyden's update for solving systems of nonlinear equations.
Cited by 160 (11 self)
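As a companion sketch (not the paper's compact matrix form itself, which collects the update pairs into block matrices), the same limited-memory idea appears in the standard two-loop recursion, which applies the inverse BFGS approximation H_k to a vector using only the stored pairs (s_i, y_i):

```python
def two_loop(grad, pairs):
    """Apply the limited-memory BFGS inverse Hessian approximation H_k to
    `grad` via the classic two-loop recursion.  `pairs` is a list of
    (s_i, y_i) update pairs, oldest first; H_0 = gamma * I is taken from
    the newest pair.  Illustrative sketch, pure Python, no dependencies."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    rhos = [1.0 / dot(y, s) for s, y in pairs]
    q = list(grad)
    alphas = []
    # First loop: newest pair to oldest.
    for (s, y), rho in zip(reversed(pairs), reversed(rhos)):
        a = rho * dot(s, q)
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    # Initial scaling gamma = s^T y / y^T y from the newest pair.
    s_new, y_new = pairs[-1]
    gamma = dot(s_new, y_new) / dot(y_new, y_new)
    r = [gamma * qi for qi in q]
    # Second loop: oldest pair to newest (alphas were stored newest-first).
    for (s, y), rho, a in zip(pairs, rhos, reversed(alphas)):
        b = rho * dot(y, r)
        r = [ri + si * (a - b) for ri, si in zip(r, s)]
    return r  # r ~= H_k * grad
```

With a single stored pair, the result satisfies the secant condition H y = s exactly, which is a convenient sanity check for any limited-memory implementation.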