Results 1–10 of 281
Interior-point methods for nonconvex nonlinear programming: Filter methods and merit functions
 Computational Optimization and Applications
, 2002
Cited by 119 (8 self)
Abstract. In this paper, we present global and local convergence results for an interior-point method for nonlinear programming and analyze the computational performance of its implementation. The algorithm uses an ℓ1 penalty approach to relax all constraints, to provide regularization, and to bound the Lagrange multipliers. The penalty problems are solved using a simplified version of Chen and Goldfarb’s strictly feasible interior-point method [12]. The global convergence of the algorithm is proved under mild assumptions, and local analysis shows that it converges Q-quadratically for a large class of problems. The proposed approach is the first to simultaneously have all of the following properties while solving a general nonconvex nonlinear programming problem: (1) the convergence analysis does not assume boundedness of dual iterates, (2) local convergence does not require the Linear Independence Constraint Qualification, (3) the solution of the penalty problem is shown to locally converge to optima that may not satisfy the Karush-Kuhn-Tucker conditions, and (4) the algorithm is applicable to mathematical programs with equilibrium constraints. Numerical testing on a set of general nonlinear programming problems, including degenerate and infeasible problems, confirms the theoretical results. We also provide comparisons to a highly efficient nonlinear solver and thoroughly analyze the effects of enforcing theoretical convergence guarantees on the computational performance of the algorithm.
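The ℓ1 relaxation at the core of the algorithm can be pictured on a toy problem. The sketch below is my own (plain subgradient descent on an assumed one-variable problem, not the paper's interior-point solver); it shows why an exact ℓ1 penalty with a sufficiently large weight recovers the constrained optimum:

```python
# Exact l1 penalty for an inequality constraint c(x) <= 0:
#   phi(x) = f(x) + rho * max(0, c(x))
# Toy problem (illustrative only): minimize f(x) = x^2 subject to x >= 1,
# i.e. c(x) = 1 - x. With rho larger than the optimal multiplier (here 2),
# the unconstrained minimizer of phi coincides with the constrained optimum x* = 1.
rho, step, x = 10.0, 0.01, 5.0
for _ in range(2000):
    grad_f = 2.0 * x                          # gradient of f(x) = x^2
    grad_pen = -rho if 1.0 - x > 0 else 0.0   # subgradient of rho * max(0, 1 - x)
    x -= step * (grad_f + grad_pen)
# x now chatters in a small band around the constrained optimum x* = 1
```

The chatter caused by the nonsmooth penalty is one reason the paper instead solves the penalty problems with a strictly feasible interior-point method.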
Efficient solving of large nonlinear arithmetic constraint systems with complex Boolean structure
 Journal on Satisfiability, Boolean Modeling and Computation
, 2007
Cited by 85 (11 self)
In order to facilitate automated reasoning about large Boolean combinations of nonlinear arithmetic constraints involving transcendental functions, we provide a tight integration of recent SAT solving techniques with interval-based arithmetic constraint solving. Our approach deviates substantially from lazy theorem proving approaches in that it directly controls arithmetic constraint propagation from the SAT solver rather than delegating arithmetic decisions to a subordinate solver. Through this tight integration, all the algorithmic enhancements that were instrumental to the enormous performance gains recently achieved in propositional SAT solving carry over smoothly to the rich domain of nonlinear arithmetic constraints. As a consequence, our approach is able to handle large constraint systems with extremely complex Boolean structure, involving Boolean combinations of several thousand arithmetic constraints over thousands of variables.
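The arithmetic half of this integration is interval constraint propagation. As a minimal sketch (toy code of my own, not the authors' solver), the hull-consistency narrowing step for a single constraint x + y = z looks like this:

```python
def narrow_sum(x, y, z):
    """Narrow the intervals (lo, hi) of x, y, z to be consistent with x + y = z.
    This is the basic propagation step that a tightly integrated solver
    interleaves with SAT-style decisions on the Boolean structure."""
    z2 = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))
    x2 = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))
    y2 = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))
    return x2, y2, z2

# From x, y in [0, 10] and z in [12, 15], propagation deduces x, y in [2, 10].
x, y, z = narrow_sum((0.0, 10.0), (0.0, 10.0), (12.0, 15.0))
```

Repeating such narrowing steps to a fixed point, and splitting intervals when propagation stalls, is the interval analogue of unit propagation and case splitting in SAT.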
On Augmented Lagrangian methods with general lower-level constraints
, 2005
Cited by 80 (7 self)
Augmented Lagrangian methods with general lower-level constraints are considered in the present research. These methods are useful when efficient algorithms exist for solving subproblems in which the constraints are only of the lower-level type. Two methods of this class are introduced and analyzed. Inexact resolution of the lower-level constrained subproblems is considered. Global convergence is proved using the Constant Positive Linear Dependence constraint qualification. Conditions for boundedness of the penalty parameters are discussed. The reliability of the approach is tested by means of an exhaustive comparison against Lancelot. All the problems of the Cute collection are used in this comparison. Moreover, the resolution of location problems in which many constraints of the lower-level set are nonlinear is addressed, employing the Spectral Projected Gradient method for solving the subproblems. Problems of this type with more than 3 × 10⁶ variables and 14 × 10⁶ constraints are solved in this way, using moderate computer time.
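The classical first-order multiplier update behind such methods can be sketched on a one-variable equality-constrained toy problem (my own illustration; the paper's subproblems are solved inexactly and carry general lower-level constraints):

```python
# Augmented Lagrangian for: minimize x^2 subject to h(x) = x - 1 = 0.
#   L_A(x, lam, rho) = x^2 + lam*(x - 1) + (rho/2)*(x - 1)^2
def solve_subproblem(lam, rho):
    # Stationarity 2x + lam + rho*(x - 1) = 0 gives the closed-form minimizer.
    return (rho - lam) / (2.0 + rho)

lam, rho = 0.0, 4.0
for _ in range(20):
    x = solve_subproblem(lam, rho)
    lam += rho * (x - 1.0)   # first-order multiplier update
# x -> 1 (the constrained optimum), lam -> -2 (the true multiplier)
```

With the penalty parameter held fixed at rho = 4, each outer iteration shrinks the multiplier error by a constant factor, so the penalty parameter need not be driven to infinity — the boundedness property the paper analyzes.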
Modeling and Optimization with Optimica and JModelica.org—Languages and Tools for Solving Large-Scale Dynamic Optimization Problems
 Computers and Chemical Engineering
Cited by 26 (3 self)
The Modelica language, targeted at modeling of complex physical systems, has gained increased attention during the last decade. Modelica is about to establish itself as a de facto standard in the modeling community, with strong support both within academia and industry. While there are several tools, both commercial and free, supporting simulation of Modelica models, few efforts have been made in the area of dynamic optimization of Modelica models. In this paper, an extension to the Modelica language, entitled Optimica, is reported. Optimica enables compact and intuitive formulations of optimization problems, static and dynamic, based on Modelica models. The paper also reports a novel Modelica-based open source project, JModelica.org, specifically targeted at dynamic optimization. JModelica.org supports the Optimica extension and offers an open platform based on established technologies, including Python, C, Java and XML. Examples are pro…
Optimica: An Extension of Modelica Supporting Dynamic Optimization
, 2008
Cited by 25 (3 self)
In this paper, an extension of Modelica, entitled Optimica, is presented. Optimica extends Modelica with language constructs that enable formulation of dynamic optimization problems based on Modelica models. There are several important design problems that can be addressed by means of dynamic optimization, in a wide range of domains. Examples include minimum-time problems, parameter estimation problems, and online optimization control strategies. The Optimica extension is supported by a prototype compiler, the Optimica compiler, which has been used successfully in case studies.
Infinite Kernel Learning
, 2008
Cited by 25 (0 self)
In this paper we consider the problem of automatically learning the kernel from general kernel classes. Specifically, we build upon the Multiple Kernel Learning (MKL) framework and in particular on the work of Argyriou, Hauser, Micchelli, and Pontil (2006). We formulate a Semi-Infinite Program (SIP) to solve the problem and devise a new algorithm to solve it (Infinite Kernel Learning, IKL). The IKL algorithm is applicable to both the finite and infinite case, and we find it to be faster and more stable than SimpleMKL (Rakotomamonjy, Bach, Canu, & Grandvalet, 2007) when many kernels are used. In the second part we present the first large-scale comparison of SVMs to MKL on a variety of benchmark datasets, also comparing IKL. The results show two things: (a) for many datasets there is no benefit in linearly combining kernels with MKL/IKL instead of using the SVM classifier, so the flexibility of using more than one kernel seems to be of no use; (b) on some datasets IKL yields impressive increases in accuracy over SVM/MKL due to the possibility of using a largely increased kernel set. In those cases, IKL remains practical, whereas both cross-validation and standard MKL are infeasible.
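The kernel families in question can be pictured as convex combinations of base kernels: MKL searches over a fixed finite set of base kernels, while IKL lets the kernel parameters range continuously. A minimal sketch (the widths and weights below are illustrative, not from the paper):

```python
import math

def gaussian(x, y, sigma):
    # Gaussian (RBF) base kernel on scalars.
    return math.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

def combined_kernel(x, y, weighted_sigmas):
    """Convex combination sum_m beta_m * k_sigma_m(x, y); the weights beta_m
    are nonnegative and sum to 1 (the simplex constraint used in MKL/IKL)."""
    return sum(beta * gaussian(x, y, sigma) for beta, sigma in weighted_sigmas)

# Two base kernels with widths 0.5 and 2.0, equally weighted.
k = combined_kernel(0.0, 1.0, [(0.5, 0.5), (0.5, 2.0)])
```

In IKL the semi-infinite program effectively selects both the active widths and their weights from a continuum, which is what allows the "largely increased kernel set" mentioned in the abstract.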
Phase-field relaxation of topology optimization with local stress constraints
 SFB Report 0435 (SFB F013, University Linz, 2004), submitted
, 2005
Cited by 19 (1 self)
We introduce a new relaxation scheme for structural topology optimization problems with local stress constraints based on a phase-field method. The starting point of the relaxation is a reformulation of the material problem involving linear and 0–1 constraints only. The 0–1 constraints are then relaxed and approximated by a Cahn-Hilliard type penalty in the objective functional, which yields convergence of minimizers to 0–1 designs as the penalty parameter decreases to zero. A major advantage of this kind of relaxation, as opposed to standard approaches, is a uniform constraint qualification that is satisfied for any positive value of the penalization parameter. The relaxation scheme yields a large-scale optimization problem with a large number of linear inequality constraints. We discretize the problem by finite elements and solve the arising finite-dimensional programming problems by a primal-dual interior point method. Numerical experiments for problems with stress constraints based on different criteria indicate the success and robustness of the new approach.
Optimal topological simplification of discrete functions on surfaces
 Discrete & Computational Geometry
Cited by 19 (6 self)
We solve the problem of minimizing the number of critical points among all functions on a surface within a prescribed distance δ from a given input function. The result is achieved by establishing a connection between discrete Morse theory and persistent homology. Our method completely removes homological noise with persistence less than 2δ, constructively proving the tightness of a lower bound on the number of critical points given by the stability theorem of persistent homology in dimension two for any input function. We also show that an optimal solution can be computed in linear time after persistence pairs have been computed.
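The persistence pairing that the method relies on is easiest to see in one dimension. The following sketch (a 1D sublevel-set analogue of the surface case, code of my own) computes the (birth, death) pairs whose persistence death − birth the simplification would compare against 2δ:

```python
def persistence_pairs(vals):
    """0-dimensional sublevel-set persistence of a 1D sequence.
    Returns (birth, death) pairs; the global minimum never dies."""
    n = len(vals)
    order = sorted(range(n), key=lambda i: vals[i])
    parent = [None] * n                    # None = vertex not yet entered
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    birth, pairs = {}, []
    for i in order:                        # sweep vertices by function value
        parent[i] = i
        birth[i] = vals[i]
        for j in (i - 1, i + 1):           # merge with already-entered neighbors
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:  # elder rule: younger component dies
                    ri, rj = rj, ri
                if birth[rj] < vals[i]:    # skip zero-persistence pairs
                    pairs.append((birth[rj], vals[i]))
                parent[rj] = ri
    return pairs

pairs = persistence_pairs([0.0, 2.0, 0.5, 3.0])  # one class born at 0.5 dies at 2.0
```

Pairs with death − birth < 2δ are exactly the homological noise the algorithm removes; on surfaces the same pairing is computed for critical points of all indices.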
Interior-point methods for optimization
, 2008
Cited by 18 (0 self)
This article describes the current state of the art of interiorpoint methods (IPMs) for convex, conic, and general nonlinear optimization. We discuss the theory, outline the algorithms, and comment on the applicability of this class of methods, which have revolutionized the field over the last twenty years.
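The barrier idea at the heart of IPMs is easy to see in one variable. This sketch is mine (a toy problem whose barrier subproblem has a closed-form minimizer; practical IPMs instead take Newton steps and update μ adaptively):

```python
import math

# Log-barrier method for: minimize (x - 2)^2 subject to x <= 1.
# The barrier subproblem  min (x - 2)^2 - mu * log(1 - x)  has the interior
# stationary point below; driving mu -> 0 follows the central path to x* = 1.
def barrier_minimizer(mu):
    # Solves 2*(x - 2) + mu/(1 - x) = 0 for the root with x < 1.
    return (3.0 - math.sqrt(1.0 + 2.0 * mu)) / 2.0

mu = 1.0
for _ in range(30):
    x = barrier_minimizer(mu)
    mu *= 0.5                 # shrink the barrier parameter
# x follows the central path toward the constrained optimum x* = 1
```

Every iterate stays strictly inside the feasible region (x < 1), which is the defining "interior-point" property the article surveys.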
Design and Implementation of Time Efficient Trajectories for Underwater Vehicles
 Journal of Ocean Engineering
Cited by 16 (12 self)
This paper discusses control strategies adapted for practical implementation and efficient motion of underwater vehicles. These trajectories are piecewise-constant thrust arcs with few actuator switchings. We provide a numerical algorithm that computes the time-efficient trajectories, parameterized by their switching times. We discuss both the theoretical analysis and experimental implementation results.
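The switching-time parameterization can be illustrated on a double integrator (an assumed toy model of my own, not the vehicle dynamics in the paper): thrust is held at +u until the switching time, then at −u, and a candidate trajectory is evaluated by forward simulation:

```python
def simulate(ts, T, u=1.0, dt=1e-4):
    """Simulate x'' = a with piecewise-constant thrust: a = +u for t < ts,
    a = -u afterwards (semi-implicit Euler). Returns final position and velocity."""
    x, v, t = 0.0, 0.0, 0.0
    while t < T:
        a = u if t < ts else -u
        v += a * dt
        x += v * dt
        t += dt
    return x, v

# One switch at ts = 1 with horizon T = 2: a rest-to-rest transfer,
# ending near x = 1 with v near 0.
x, v = simulate(1.0, 2.0)
```

An outer optimizer then treats (ts, T) as the decision variables, which keeps the number of actuator switchings small by construction.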