### Citations

481 | Efficient global optimization of expensive black-box functions
- Jones, Schonlau, et al.
- 1998
Citation Context ...hniques, like pattern search methods [11,12], that add more information from the problem itself to improve initial starts for the above method. Also Kriging techniques using the DACE regression model [2,10,15] can be used in this phase, which additionally offer a more global optimization aspect when also applied later in the process. The regression model includes effects due to correlation. Based on value...

457 | Direct search solution of numerical and statistical problems
- Hooke, Jeeves
- 1961
Citation Context ...that the projected gradient P(∇_x ALAG(x*; λ*; μ)) = 0, where

[P(∇_x F(x))]_j = [∇_x F(x)]_j           if a_j < x_j < b_j
                = min([∇_x F(x)]_j, 0)   if a_j = x_j
                = max([∇_x F(x)]_j, 0)   if b_j = x_j      (8)

The reduction in dimension has been paid back by a loss of smoothness due to the ‘max’ function. However, any algorithm can check for c_i(x) = λ_i/(2μ_i), and in general this does not occur at a KKT point ...
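The component-wise projection in Eq. (8) is straightforward to sketch. The following is an illustrative NumPy version; the function name and the test problem are my own, not taken from the cited works:

```python
import numpy as np

def projected_gradient(grad, x, a, b):
    """Project the gradient of F onto the bounds a <= x <= b, following
    Eq. (8): keep free components unchanged; at an active bound keep only
    the part of the gradient that points into the feasible region."""
    p = grad.copy()
    at_lower = np.isclose(x, a)
    at_upper = np.isclose(x, b)
    p[at_lower] = np.minimum(grad[at_lower], 0.0)
    p[at_upper] = np.maximum(grad[at_upper], 0.0)
    return p

# At a bound-constrained minimum the projected gradient vanishes even though
# the raw gradient does not: F(x) = x1^2 + x2 on [0, 1]^2 has its minimum at
# (0, 0) with raw gradient (0, 1).
g = projected_gradient(np.array([0.0, 1.0]), np.array([0.0, 0.0]),
                       np.array([0.0, 0.0]), np.array([1.0, 1.0]))
```

The check `P(∇_x F) = 0` is then a natural stationarity test for bound-constrained pattern search.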

231 | Optimization by direct search: New perspectives on some classical and modern methods
- Kolda, Lewis, et al.
Citation Context ...oning problem happens when dealing with logarithmic barrier functions that provide an impassable barrier at the boundary of the feasible region [21]. Recently pattern search methods have been studied [11,12]. The most well-known method of this class is the method of Hooke-Jeeves [8]. The Nelder-Mead method November 30, 2006 10:31 WSPC - Proceedings Trim Size: 9in x 6in SIMAI˙2006˙Proc˙JtMALAEG 3 does not...
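A minimal sketch of a pattern search in the Hooke-Jeeves spirit referenced above: exploratory coordinate polls with step shrinking only (the full Hooke-Jeeves method adds pattern moves). All names and the test function are illustrative:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, shrink=0.5, max_iter=1000):
    """Minimal coordinate pattern search: poll f at x +/- step*e_j for each
    coordinate, accept any improvement, otherwise shrink the step.
    Derivative-free, so it tolerates noisy or non-smooth objectives."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for j in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[j] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
    return x, fx

# Quadratic test bowl with minimizer (1, -2).
xmin, fmin = pattern_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                            [0.0, 0.0])
```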

87 | Asynchronous parallel pattern search for nonlinear optimization
- Hough, Kolda, et al.
Citation Context ...oning problem happens when dealing with logarithmic barrier functions that provide an impassable barrier at the boundary of the feasible region [21]. Recently pattern search methods have been studied [11,12]. The most well-known method of this class is the method of Hooke-Jeeves [8]. The Nelder-Mead method does not...

86 | DACE: A MATLAB Kriging Toolbox
- Lophaven, Nielsen, et al.
- 2002
Citation Context ...hniques, like pattern search methods [11,12], that add more information from the problem itself to improve initial starts for the above method. Also Kriging techniques using the DACE regression model [2,10,15] can be used in this phase, which additionally offer a more global optimization aspect when also applied later in the process. The regression model includes effects due to correlation. Based on value...

75 | Flexibility and Efficiency Enhancements for Constrained Global Design Optimization with Kriging Approximations
- Sasena
- 2002
Citation Context ... Corr[Y(x^(i)), Y(x^(j))] = exp[−Σ_k θ_k |x_k^(i) − x_k^(j)|^(p_k)], an exact interpolant Ŷ(x) can be derived, together with an estimate of the variance s^2(x) [clearly s^2(x^(i)) = 0]. The EGO algorithms [10,20] maximize the expected improvement for Y ~ N(Ŷ, s^2). However, experience shows that only a small reduction in the number of overall function evaluations was obtained when including this approach ...
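The quoted DACE correlation and the EGO expected-improvement criterion can be written down directly from the standard formulas (Jones et al.). A hedged sketch; the function names and all sample values are illustrative:

```python
import math

def dace_corr(xi, xj, theta, p):
    """Gaussian-family DACE correlation:
    Corr = exp(-sum_k theta_k * |xi_k - xj_k| ** p_k)."""
    return math.exp(-sum(t * abs(a - b) ** q
                         for t, q, a, b in zip(theta, p, xi, xj)))

def expected_improvement(y_hat, s, y_best):
    """EI for a Gaussian predictor Y ~ N(y_hat, s^2) when minimizing:
    EI = (y_best - y_hat) * Phi(u) + s * phi(u),  u = (y_best - y_hat) / s.
    At a sampled point s = 0, so EI reduces to the deterministic improvement."""
    if s <= 0.0:
        return max(y_best - y_hat, 0.0)
    u = (y_best - y_hat) / s
    Phi = 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    phi = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return (y_best - y_hat) * Phi + s * phi

r_same = dace_corr([0.3, 0.7], [0.3, 0.7], theta=[1.0, 2.0], p=[2.0, 2.0])
ei_sampled = expected_improvement(y_hat=1.0, s=0.0, y_best=0.5)
ei_uncertain = expected_improvement(y_hat=1.0, s=0.5, y_best=0.5)
```

The correlation is 1 at coincident points (which is what forces s^2(x^(i)) = 0 at the samples), and EI is positive wherever the predictor is uncertain, even if the predicted mean is worse than the incumbent.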

71 | Trust region augmented Lagrangian methods for sequential response surface approximation and optimization
- Rodriguez, Renaud, et al.
- 1997
Citation Context ...ther weak conditions under which the methods do converge, e.g. for the unconstrained problem f ∈ C^1. By introducing slack variables s_i ≥ 0, the augmented Lagrangian penalty function can be written as [18]

ALAG,s(x; λ, μ; s) = f(x) − Σ_{i=1}^m λ_i [c_i(x) + s_i] + Σ_{i=1}^m μ_i [c_i(x) + s_i]^2   (3)
                   = f(x) + Σ_{i=1}^m { μ_i [(c_i(x) − λ_i/(2μ_i)) + s_i]^2 − λ_i^2/(4μ_i) }   (4)

in which L is the s...
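The step from (3) to (4) is completing the square in c_i(x) + s_i. A quick numerical check of the identity; the constraint symbol c_i and all sample values are assumptions of this sketch, not taken from the cited paper:

```python
def alag_form3(fx, c, lam, mu, s):
    """Eq. (3): f - sum_i lam_i (c_i + s_i) + sum_i mu_i (c_i + s_i)^2."""
    return fx + sum(-l * (ci + si) + m * (ci + si) ** 2
                    for l, m, ci, si in zip(lam, mu, c, s))

def alag_form4(fx, c, lam, mu, s):
    """Eq. (4): f + sum_i { mu_i [(c_i - lam_i/(2 mu_i)) + s_i]^2
                            - lam_i^2 / (4 mu_i) }  (completed square)."""
    return fx + sum(m * ((ci - l / (2 * m)) + si) ** 2 - l ** 2 / (4 * m)
                    for l, m, ci, si in zip(lam, mu, c, s))

# Arbitrary sample values for f(x), c_i(x), lambda_i, mu_i, s_i.
v3 = alag_form3(2.0, [0.3, -0.1], [1.5, 0.8], [4.0, 2.0], [0.2, 0.5])
v4 = alag_form4(2.0, [0.3, -0.1], [1.5, 0.8], [4.0, 2.0], [0.2, 0.5])
```

The two forms agree term by term for any values, which is why (4) can be minimized over the slacks in closed form.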

69 | A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization With General Constraints and Simple Bounds
- Lewis, Torczon
Citation Context ...− λ_i^2/(4μ_i) }  (4), in which L is the standard Lagrangian. The parameters λ_i and μ_i are Lagrange multipliers and penalty factors, respectively. Pattern search methods for (3) converge when f, c_i ∈ C^2 [13]. Minimization of (4) over the slack variables s_i at the optimal value s_i* = max(λ_i/(2μ_i) − c_i(x), 0) yields the simplified merit function that is used in [7], ALAG(x; λ, μ) = f(x) − Σ_{i=1}^m λ_i ...
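The closed-form slack s_i* can be verified against a brute-force scan of the i-th term of (4); symbols and sample values are again assumptions of this sketch:

```python
def s_star(c, lam, mu):
    """Closed-form minimizer over s_i >= 0 of the i-th term of Eq. (4):
    s_i* = max(lam_i/(2 mu_i) - c_i, 0)."""
    return [max(l / (2 * m) - ci, 0.0) for ci, l, m in zip(c, lam, mu)]

def term4(ci, l, m, si):
    # i-th term of Eq. (4), viewed as a function of the slack s_i
    return m * ((ci - l / (2 * m)) + si) ** 2 - l ** 2 / (4 * m)

c, lam, mu = [0.3, -0.6], [1.5, 0.8], [4.0, 2.0]
s_opt = s_star(c, lam, mu)
# brute-force check: no slack on a fine grid beats the closed form
scan_best = [min(term4(ci, l, m, 0.001 * k) for k in range(2001))
             for ci, l, m in zip(c, lam, mu)]
opt_vals = [term4(ci, l, m, si)
            for ci, l, m, si in zip(c, lam, mu, s_opt)]
```

The `max(..., 0)` in s_i* is exactly the source of the non-smoothness discussed in the surrounding contexts.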

63 | Discrete optimization of structures using genetic algorithms
- Rajeev, Krishnamoorthy
- 1992
Citation Context ...e most well-known probabilistic methods for both continuous and discrete optimization are Genetic Algorithms [17] and Simulated Annealing [22]. One major advantage of probabilistic methods is their ability to directly deal with discrete design variables. However, these methods are extremely expensive (because of...
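To illustrate why probabilistic methods handle discrete variables directly, here is a minimal simulated-annealing sketch on an integer design space. The problem, neighborhood, and all parameter values are illustrative, not from the cited works:

```python
import math, random

def simulated_annealing(f, x0, neighbors, T0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Minimal simulated annealing for discrete design variables: propose a
    random neighbor, always accept improvements, accept uphill moves with
    probability exp(-delta/T), and cool the temperature geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        y = rng.choice(neighbors(x))
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(T, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling
    return best, fbest

# Toy discrete problem: integer sizes in {0..10}^2 minimizing a bowl.
def nbrs(x):
    out = []
    for j in (0, 1):
        for d in (-1, 1):
            y = list(x)
            y[j] = min(10, max(0, y[j] + d))
            out.append(tuple(y))
    return out

best, fbest = simulated_annealing(lambda v: (v[0] - 3) ** 2 + (v[1] - 7) ** 2,
                                  (0, 0), nbrs)
```

No rounding or relaxation is needed, but each proposal costs one function evaluation, which is the expense the context warns about.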

50 | Asymptotic probability extraction for non-normal distributions of circuit performance
- Li, Le, et al.
- 2004
Citation Context ...ly the number of standard deviations from the mean value. With fundamentally increasing variation magnitudes, non-normality of performance distributions must be taken into account. The APEX algorithm [14] addresses this point. A response surface model of f is built that is quadratic in the process variations. The method relies (yet) on the explicit calculation of the stochastic moments of f and on AWE...

31 | A grid algorithm for bound constrained optimization of noisy functions
- Elster, Neumaier
- 1995
Citation Context ...‖∇_x ALAG(x; λ, μ)‖ and on ‖max(c_i(x), λ_i/(2μ_i))‖. In our case [7] the whole approach is applied on a grid (i.e. all x^(k) are projected to the nearest grid point), which subsequently is refined, as in [5]. This prevents the algorithm from going too fast to a small scale (and then getting stuck in a local minimum). ...
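The grid-projection-and-refinement idea can be sketched as follows; this is an illustrative toy version (names, polling scheme, and refinement factor are my own), not the algorithm of [5] or [7]:

```python
def project_to_grid(x, h):
    """Project each coordinate to the nearest point of a grid of spacing h."""
    return [round(xi / h) * h for xi in x]

def grid_search(f, x0, h0=1.0, levels=5, polls=50):
    """Sketch of a grid-restricted search: poll grid neighbors at spacing h,
    move while it helps, then refine the grid (h -> h/2). Searching a coarse
    grid first keeps the method from shrinking its scale too quickly and
    stalling in a shallow local minimum."""
    x = project_to_grid(x0, h0)
    h = h0
    for _ in range(levels):
        for _ in range(polls):
            moved = False
            for j in range(len(x)):
                for d in (-h, h):
                    y = list(x)
                    y[j] += d
                    if f(y) < f(x):
                        x, moved = y, True
            if not moved:
                break
        h /= 2.0
        x = project_to_grid(x, h)
    return x

# 1-D quadratic with minimizer 0.25; reachable once the grid is fine enough.
xg = grid_search(lambda v: (v[0] - 0.25) ** 2, [3.7], h0=1.0)
```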

22 | Kriging interpolation in simulation: a survey
- Kleijnen, Beers
- 2004
Citation Context ...hniques, like pattern search methods [11,12], that add more information from the problem itself to improve initial starts for the above method. Also Kriging techniques using the DACE regression model [2,10,15] can be used in this phase, which additionally offer a more global optimization aspect when also applied later in the process. The regression model includes effects due to correlation. Based on value...

18 | Methods for optimization of nonlinear problems with discrete variables: a review
- Arora, Huang, et al.
- 1994
Citation Context ...here the term mixed-discrete indicates that both discrete and continuous design variables are present. Current discrete optimization methods can be classified as either deterministic or probabilistic [1]. Probabilistic methods have been applied to solve engineering optimization problems for a long time. The most well-known probabilistic methods ...

7 | Optimum design of trusses with discrete sizing and shape variables
- Salajegheh, Vanderplaats
- 1993
Citation Context ...uations), and thus impractical for the optimization of electronic circuits. Various approaches have been developed to apply deterministic methods to discrete and mixed-discrete optimization problems [6,19]. The simplest and least expensive method for obtaining a discrete solution is rounding the continuous solution to the closest discrete values. However, this rounding process can easil...
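Nearest-value rounding is a one-liner, and a small hypothetical example shows how it can break feasibility (the constraint, catalogs, and values below are my own illustration, not from the cited papers):

```python
def round_to_discrete(x, catalogs):
    """Round each continuous variable to the nearest allowed discrete value
    (e.g. a catalog of standard component sizes)."""
    return [min(cat, key=lambda v: abs(v - xi))
            for xi, cat in zip(x, catalogs)]

# Hypothetical design rule: x1 + x2 >= 1.3, i.e. feasible iff g(x) <= 0.
g = lambda x: 1.3 - x[0] - x[1]

x_cont = [0.7, 0.7]                                   # feasible: g = -0.1
x_disc = round_to_discrete(x_cont, [[0.5, 1.0], [0.5, 1.0]])
# Both coordinates round down to 0.5, and the rounded design violates g.
```

Rounding each coordinate independently ignores the constraints, which is exactly why the rounded point can end up infeasible or far from the discrete optimum.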

3 | What, If Anything, Is New in Optimization
- Wright
- 2000
Citation Context ... lim_{c→∞} x_c = x*, which means that the minimization of (2) becomes more and more ill-conditioned. Apart from that, the Nelder-Mead algorithm, which is simple to program, is not that easy to analyse [21]. A similar conditioning problem happens when dealing with logarithmic barrier functions that provide an impassable barrier at the boundary of the feasible region [21]. Recently pattern search methods...

2 | A pseudo-discrete rounding method for structural optimization
- Groenwold, Stander, Snyman
- 1996
Citation Context ...uations), and thus impractical for the optimization of electronic circuits. Various approaches have been developed to apply deterministic methods to discrete and mixed-discrete optimization problems [6,19]. The simplest and least expensive method for obtaining a discrete solution is rounding the continuous solution to the closest discrete values. However, this rounding process can easil...

1 | JiffyTune: circuit optimization using time-domain sensitivities
- Conn, Coulman, et al.
- 1998
Citation Context ...et support the calculation of sensitivities). When the number of parameters increases, adjoint sensitivity methods become of interest. For transient integration of linear circuits this is described in [3]. Recently, in [9] a more general procedure is described that also applies to nonlinear circuits and retains efficiency by exploiting (nonlinear) techniques from Model Order Reduction. In this paper w...

1 | Nonlinear programming: introduction, unconstrained and constrained optimization
- Di Pillo, Palagi
- 2002
Citation Context ... The reduction in dimension has been paid back by a loss of smoothness due to the ‘max’ function. However, any algorithm can check for c_i(x) = λ_i/(2μ_i), and in general this does not occur at a KKT point [4]. Thus, when started close enough, pattern search methods will also converge for (6). A conjecture is that ...

1 | Augmented Lagrangian algorithm for optimizing analog circuit design
- Heijmen, Lin, et al.
- 2003
Citation Context ... and the constraint functions c_i(x) are obtained from circuit simulation. The performance and stability of the optimization algorithm is affected by the scaling of the OVs, of f(x), and of the c_i(x) [7]. The Nelder-Mead algorithm can be used to minimize

Φ_c(x) = f(x) + c Σ_{i=1}^m ⟨c_i(x)⟩^2      (2)

where ⟨y⟩ = max(y, 0). If the minimum is attained at x_c, one has lim_{c→∞} x_c = x*, which means that the mi...
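The limit lim_{c→∞} x_c = x* can be seen on a one-variable example; this is an illustrative sketch of a quadratic exterior penalty in the style of (2) (problem, solver, and all names are my own assumptions):

```python
def penalty(f, cons, c):
    """Phi_c(x) = f(x) + c * sum_i max(c_i(x), 0)^2, a quadratic exterior
    penalty in the style of Eq. (2), with <y> = max(y, 0)."""
    def phi(x):
        return f(x) + c * sum(max(ci(x), 0.0) ** 2 for ci in cons)
    return phi

def minimize_1d(phi, lo=-10.0, hi=10.0, iters=200):
    """Golden-section search on [lo, hi]; assumes phi is unimodal there."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c1, c2 = b - g * (b - a), a + g * (b - a)
        if phi(c1) < phi(c2):
            b = c2
        else:
            a = c1
    return (a + b) / 2

# min f(x) = x subject to -x <= 0, so the true optimum is x* = 0. The
# unconstrained penalty minimizer is x_c = -1/(2c): it only approaches x*
# as c -> infinity, and the growing curvature c is exactly what makes the
# minimization of Phi_c increasingly ill-conditioned.
xs = [minimize_1d(penalty(lambda x: x, [lambda x: -x], c))
      for c in (1, 10, 100)]
```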

1 | Adjoint Transient Sensitivity Analysis
- Ilievski, Xu, et al.
- 2006
Citation Context ...culation of sensitivities). When the number of parameters increases, adjoint sensitivity methods become of interest. For transient integration of linear circuits this is described in [3]. Recently, in [9] a more general procedure is described that also applies to nonlinear circuits and retains efficiency by exploiting (nonlinear) techniques from Model Order Reduction. In this paper we will describe ou...

1 | On parametric test design for analog integrated circuits considering error in measurement and stimulus
- Pronath, Graeb, et al.
Citation Context ...g, where A indicates the acceptance region for the constraint function values, the worst-case point s_ws,i can be defined as s_ws,i = argmax_{s∈A} pdf_i(s) = argmin_{s∈A} β_i^2(s). In the worst-case analysis in [16] first a linearization with respect to s is done, and a s_WC is derived. Next, this value is used in the linearization with respect to x, and an x_WC is determined. This process is iteratively repeated i...

1 | Stochastic search techniques for global optimization of structures
- Soon, K
- 1992
Citation Context ...ic methods for both continuous and discrete optimization are Genetic Algorithms [17] and Simulated Annealing [22]. One major advantage of probabilistic methods is their ability to directly deal with discrete design variables. However, these methods are extremely expensive (because of a large number of function e...