Equality constraints, Riemannian manifolds and direct search methods. Los Alamos (2006)

by D W Dreisigmeyer
Results 1 - 4 of 4

DIRECT SEARCH METHODS OVER LIPSCHITZ MANIFOLDS

by David W. Dreisigmeyer, 2007
"... We extend direct search methods to optimization problems that include equality constraints given by Lipschitz functions. The equality constraints are assumed to implicitly define a Lipschitz manifold. Numerically implementing the inverse (implicit) function theorem allows us to define a new problem ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart
We extend direct search methods to optimization problems that include equality constraints given by Lipschitz functions. The equality constraints are assumed to implicitly define a Lipschitz manifold. Numerically implementing the inverse (implicit) function theorem allows us to define a new problem on the tangent spaces of the manifold. We can then use a direct search method on the tangent spaces to solve the new optimization problem without any equality constraints. Solving this related problem implicitly solves the original optimization problem. Our main example utilizes the LTMADS algorithm for the direct search method. However, other direct search methods can be employed. Convergence results trivially carry over to our new procedure under mild assumptions.

Citation Context

...e always exceeded the maximum allowed function evaluations. The results are shown in Figure 1. The algorithm always converged to (nearly) the correct solution, even when n = 20 or 50. 5 Discussion In [14, 15] it was assumed that the manifolds were C^2 and that Jacobian and Hessian information was available for the equality constraints. Then we could use the Exp_x mapping of T_xM into M to pull back the obj...
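
The construction described in this abstract (trade the equality-constrained problem for an unconstrained one on the tangent spaces of the implicitly defined manifold, then run a direct search there) can be sketched in a few lines. The sketch below is only an illustrative stand-in under stated assumptions: it uses a plain compass search rather than LTMADS, a finite-difference null-space basis as the tangent space, and a Gauss-Newton-type correction in place of the numerical inverse (implicit) function theorem or the Exp_x map; tangent_basis, retract and tangent_compass_search are hypothetical names, not functions from the paper.

```python
import numpy as np

def tangent_basis(g, x, h=1e-6):
    """Orthonormal basis of an approximate tangent space of M = {g(x) = 0}.

    The Jacobian of g is estimated by forward differences; the right singular
    vectors with (near-)zero singular values span its null space, which plays
    the role of T_xM here.
    """
    gx = np.atleast_1d(g(x))
    J = np.empty((len(gx), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (np.atleast_1d(g(xp)) - gx) / h
    _, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > 1e-8 * s[0]))
    return Vt[rank:].T                      # columns span the null space

def retract(g, x, tol=1e-10, iters=20, h=1e-6):
    """Pull a trial point back onto {g = 0} with Gauss-Newton-type corrections.

    Only a stand-in for the implicit-function-theorem construction (or the
    exponential map) used in the cited papers.
    """
    for _ in range(iters):
        gx = np.atleast_1d(g(x))
        if np.linalg.norm(gx) < tol:
            break
        J = np.array([(np.atleast_1d(g(x + h * e)) - gx) / h
                      for e in np.eye(len(x))]).T
        x = x - np.linalg.lstsq(J, gx, rcond=None)[0]
    return x

def tangent_compass_search(f, g, x0, step=0.5, tol=1e-6, max_iter=200):
    """Compass search along +/- tangent directions, retracting each trial point."""
    x = retract(g, np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        B = tangent_basis(g, x)
        improved = False
        for d in np.hstack([B, -B]).T:      # poll the positive and negative basis
            y = retract(g, x + step * d)
            if f(y) < f(x):
                x, improved = y, True
                break
        if not improved:                    # unsuccessful poll: shrink the step
            step *= 0.5
            if step < tol:
                break
    return x
```

As a toy check, minimizing f(x) = x_1 + x_2 over the circle g(x) = x_1^2 + x_2^2 - 1 = 0 from x0 = (1, 0) should drive the iterates toward roughly (-0.707, -0.707), the constrained minimizer.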

A Derivative-Free CoMirror Algorithm for Convex Optimization

by Heinz H. Bauschke, et al., 2014
"... We consider the minimization of a nonsmooth convex function over a compact convex set subject to a nonsmooth convex constraint. We work in the setting of derivative-free optimization (DFO), assuming that the objective and constraint functions are available through a black-box that provides function ..."
Abstract - Add to MetaCart
We consider the minimization of a nonsmooth convex function over a compact convex set subject to a nonsmooth convex constraint. We work in the setting of derivative-free optimization (DFO), assuming that the objective and constraint functions are available through a black-box that provides function values for lower-C2 representation of the functions. Our approach is based on a DFO adaptation of the CoMirror algorithm [6]. Algorithmic convergence hinges on the ability to accurately approximate subgradients of lower-C2 functions, which we prove is possible through linear interpolation. We show that, if the sampling radii for linear interpolation are properly selected, then the new algorithm has the same convergence rate as the original gradient-based algorithm. This provides a novel global rate-of-convergence result for nonsmooth convex DFO with nonsmooth convex constraints. We conclude with numerical testing that demonstrates the practical feasibility of the algorithm and some directions for further research.
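
The ingredient this abstract highlights, approximating subgradients of lower-C2 functions by linear interpolation at a carefully chosen sampling radius, can be illustrated with a minimal simplex-gradient sketch. This is not the paper's algorithm or its radius selection rule; interpolation_gradient is a hypothetical name and the coordinate sample set is used only for brevity.

```python
import numpy as np

def interpolation_gradient(f, x, radius):
    """Gradient estimate from the linear model interpolating f on
    {x, x + radius * e_1, ..., x + radius * e_n} (a simplex gradient).

    The sampling radius is the quantity whose selection drives the accuracy
    of the subgradient approximation discussed in the abstract.
    """
    x = np.asarray(x, dtype=float)
    fx = f(x)
    g = np.empty(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = radius
        g[i] = (f(x + e) - fx) / radius     # slope of the interpolating model
    return g

# Toy check: f(x) = ||x||_1 is nonsmooth and convex; away from its kinks the
# estimate approaches a true (sub)gradient as the radius shrinks.
f = lambda x: np.abs(x).sum()
print(interpolation_gradient(f, [1.0, -2.0], radius=1e-3))   # approx [ 1., -1.]
```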

A SIMPLICIAL CONTINUATION DIRECT SEARCH METHOD

by David W. Dreisigmeyer, 2007
"... A direct search method for the class of problems considered by Lewis and Torczon [SIAM J. Optim., 12 (2002), pp. 1075-1089] is developed. Instead of using an augmented Lagrangian method, a simplicial approximation method to the feasible set is implicitly employed. This allows the points our algorith ..."
Abstract - Add to MetaCart
A direct search method for the class of problems considered by Lewis and Torczon [SIAM J. Optim., 12 (2002), pp. 1075-1089] is developed. Instead of using an augmented Lagrangian method, a simplicial approximation method to the feasible set is implicitly employed. This allows the points our algorithm considers to conveniently remain within an a priori specified distance of the feasible set. In the limit, a positive spanning set is constructed in the tangent plane to the feasible set of every cluster point. In this way, we can guarantee that every cluster point is a stationary point for the objective function restricted to the feasible set.

Citation Context

... AS3: ∇g(x) is full rank in a neighborhood of M. In the language of differential geometry, we assume that f(x) is a C^2 function defined on the C^2 regular level set (1.1b) that implicitly defines M [6, 10]. The paper is organized as follows. In section 2 we describe the necessary components of the simplicial continuation algorithm in [3]. Some additional assumptions are spelled out in section 3. Given ...
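
Two ingredients this entry emphasizes, a positive spanning set built in the tangent plane of the feasible set and trial points kept within an a priori specified distance of that set, might be sketched as follows. This is not the paper's simplicial-continuation machinery; positive_spanning_set and poll_within_band are hypothetical helpers, and the tangent-plane basis B is assumed to come from the null space of the constraint Jacobian (for instance via a finite-difference construction like the tangent_basis sketch above).

```python
import numpy as np

def positive_spanning_set(B):
    """Minimal positive spanning set {b_1, ..., b_k, -(b_1 + ... + b_k)} of the
    subspace whose orthonormal basis vectors are the columns of B."""
    return np.hstack([B, -B.sum(axis=1, keepdims=True)])

def poll_within_band(f, g, x, directions, step, eps):
    """Poll along tangent-plane directions, accepting only points that both
    decrease f and stay in the band {|g| <= eps} around the feasible set
    (eps playing the role of the a priori specified distance)."""
    fx = f(x)
    for d in directions.T:
        y = x + step * d
        if np.linalg.norm(np.atleast_1d(g(y))) <= eps and f(y) < fx:
            return y, True                  # successful poll step
    return x, False                         # no acceptable trial point
```

An outer loop would shrink step after every unsuccessful poll, which is the usual direct-search mechanism that convergence arguments for such methods rely on.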

A DIRECT SEARCH METHOD FOR WORST CASE ANALYSIS AND YIELD OPTIMIZATION OF INTEGRATED CIRCUITS

by Gregor Cijan, 2009
"... Yield maximization is an important aspect in the design of integrated circuits. A prerequisite for its automation is a reliable and fast worst performance analysis which results in corners that can be used in the process of circuit optimization. We formulate the constrained optimization problem for ..."
Abstract - Add to MetaCart
Yield maximization is an important aspect in the design of integrated circuits. A prerequisite for its automation is a reliable and fast worst performance analysis which results in corners that can be used in the process of circuit optimization. We formulate the constrained optimization problem for finding the worst performance of an integrated circuit and develop a direct search method for solving it. The algorithm uses radial steps and rotations for enforcing the inequality constraint. We demonstrate the performance of the proposed algorithm on real world design examples of integrated circuits. The results indicate that the algorithm solves the worst performance problem in an efficient manner. The proposed algorithm was also successfully used in the process of yield maximization, resulting in a 99.65% yield.

Citation Context

...; n_1 + 1 ≤ i ≤ n_1 + n_2 ∧ x_i > x_{Hi}. We denote the composite of functions Ω_1 and Ω_2 by

Ω(x) = Ω_2(Ω_1(x)).   (21)

Function

P_T(x, ∆, i) = ⎧ rotate1[x, e_i, ∆_i],     1 ≤ i ≤ n_1,
               ⎨ radial1[x, ∆_i, β_min],   i = n_1 + 1,        (22)
               ⎩ x + ∆_i b_{i−n_1−1},      n_1 + 2 ≤ i ≤ n_1 + n_2 + 1,

produces a trial step across subspaces S_1 and S_2. The first type of trial steps in subspace S_1 are rotations of P_{S_1}(x) toward some basis vector e_i. If e_i is colli...
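
The abstract above describes radial steps and rotations as the moves that keep trial points feasible for the inequality constraint, and the cases in (22) dispatch between them. Since rotate1 and radial1 are not defined in this excerpt, the sketch below only illustrates the general idea for a ball constraint ||x|| <= R; radial_step and rotate_toward are hypothetical stand-ins, not the paper's operators.

```python
import numpy as np

def radial_step(x, delta, beta_min=0.1):
    """Shrink x radially toward the origin by a factor clipped at beta_min,
    so the point stays inside any origin-centered ball that contained it."""
    x = np.asarray(x, dtype=float)
    return max(1.0 - delta, beta_min) * x

def rotate_toward(x, e_i, delta):
    """Rotate x by the angle delta toward the basis vector e_i, inside the
    plane spanned by x and e_i.  The norm of x (assumed nonzero) is preserved,
    so a constraint ||x|| <= R remains satisfied after the move."""
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x)
    u = x / r
    w = e_i - (e_i @ u) * u                 # component of e_i orthogonal to x
    if np.linalg.norm(w) < 1e-12:           # x already aligned with e_i
        return x.copy()
    w = w / np.linalg.norm(w)
    return r * (np.cos(delta) * u + np.sin(delta) * w)
```

Trial points of these two kinds, plus steps along basis directions in the remaining variables as in the third case of (22), give a poll set that respects the constraint by construction.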
