Results 1–4 of 4
www.elsevier.com/locate/cor
Reverse logistics network design with stochastic lead times
Abstract

Cited by 16 (0 self)
This work is concerned with the efficient design of a reverse logistics network using an extended version of models currently found in the literature. Those traditional, basic models are formulated as mixed integer linear programs (MILP model) and determine which facilities to open so as to minimize the investment, processing, transportation, disposal and penalty costs while supply, demand and capacity constraints are satisfied. However, we show that they can be improved when they are combined with a queueing model, because this makes it possible to account for (1) some dynamic aspects like lead time and inventory positions, and (2) the higher degree of uncertainty inherent to reverse logistics. Since this extension introduces nonlinear relationships, the problem is defined as a mixed integer nonlinear program (MINLP model). Due to this additional complexity, the MINLP model is presented for a single-product, single-level network. Several examples are solved with a genetic algorithm based on the technique of differential evolution. © 2005 Published by Elsevier Ltd.
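The abstract's solution method, differential evolution, can be illustrated with a minimal sketch of the classic DE/rand/1/bin scheme on a toy objective (the sphere function, an assumption for illustration; the paper's actual MINLP objective and constraints are not reproduced here):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
    """Minimize f over a box using DE/rand/1/bin: mutate with a scaled
    difference of two population vectors, crossover, greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct population members other than i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated coordinate
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip back into the box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= costs[i]:  # keep the trial only if it is no worse
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# toy example: 3-D sphere function, global minimum 0 at the origin
x, fx = differential_evolution(lambda x: sum(v * v for v in x),
                               bounds=[(-5.0, 5.0)] * 3)
```

The greedy one-to-one selection is what distinguishes DE from a generic genetic algorithm; parameter values F and CR here are common textbook defaults, not the paper's settings.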
Reformulation and Convex Relaxation Techniques for Global Optimization
4OR, 2004
Abstract

Cited by 11 (9 self)
Many engineering optimization problems can be formulated as nonconvex nonlinear programming problems (NLPs) involving a nonlinear objective function subject to nonlinear constraints. Such problems may exhibit more than one locally optimal point. However, one is often solely or primarily interested in determining the globally optimal point. This thesis is concerned with techniques for establishing such global optima using spatial Branch-and-Bound (sBB) algorithms.
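The core sBB idea of the abstract can be sketched on a one-variable nonconvex function: compute a relaxation-based lower bound on each subinterval, keep an incumbent from point evaluations, prune boxes whose bound cannot beat the incumbent, and split the rest. The crude term-wise interval bound below stands in for the convex relaxations the thesis studies; the polynomial and bound are illustrative assumptions, not examples from the work itself:

```python
import heapq

def interval_bb(f, f_bound, lo, hi, tol=1e-6, max_iter=10000):
    """Toy spatial branch-and-bound in one variable: pop the box with the
    smallest lower bound, prune if it cannot improve the incumbent, else split."""
    best_x = 0.5 * (lo + hi)
    best_val = f(best_x)  # incumbent upper bound
    heap = [(f_bound(lo, hi), lo, hi)]
    for _ in range(max_iter):
        if not heap:
            break
        lb, a, b = heapq.heappop(heap)
        if lb > best_val - tol:  # box cannot contain a better point: prune
            continue
        mid = 0.5 * (a + b)
        val = f(mid)  # sample the midpoint as a candidate incumbent
        if val < best_val:
            best_val, best_x = val, mid
        for a2, b2 in ((a, mid), (mid, b)):  # branch into two halves
            lb2 = f_bound(a2, b2)
            if lb2 < best_val - tol:
                heapq.heappush(heap, (lb2, a2, b2))
    return best_x, best_val

# nonconvex example with two local minima: f(x) = x**4 - 4*x**2 + x on [-3, 3]
f = lambda x: x**4 - 4*x**2 + x

def f_bound(a, b):
    # valid but loose lower bound: sum of term-wise interval minima on [a, b]
    x4 = min(a**4, b**4) if a * b > 0 else 0.0
    return x4 - 4 * max(a * a, b * b) + min(a, b)

x, val = interval_bb(f, f_bound, -3.0, 3.0)
```

Real sBB implementations replace the term-wise bound with tighter convex relaxations (e.g. McCormick-style envelopes) and branch in many dimensions, but the prune/split loop is the same.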
"Randomization in Discrete Optimization: Annealing Algorithms", in Encyclopedia of Optimization, edited by P.M. Pardalos, 2001
Abstract

Cited by 1 (0 self)
Annealing algorithms have been employed extensively in the past decade to solve myriads of optimization problems. Several intractable problems, such as the traveling salesperson problem, graph partitioning and circuit layout, have been solved with satisfactory results. In this article we survey convergence results known for annealing algorithms. In particular we deal with Simulated Annealing and Nested Annealing. Simulated Annealing (SA) is a randomized heuristic that can be used to solve any combinatorial optimization problem. SA is typically used to produce quasi-optimal results. In practice SA has been applied to solve some presumably hard (e.g., NP-hard) problems, and the level of performance obtained has been promising (Golden and Skiscim 1986, El Gamal and Shperling 1984, Johnson et al. 1987, Vecchi and Kirkpatrick 1982). The success of this heuristic technique has motivated the study of its convergence. One of the early results in this direction is due to Mitra et al. (1986), who proved that SA converges in the limit to a globally optimal solution with probability 1. Later results proved certain time bounds within which SA is guaranteed to converge with high probability. In this article we provide one such proof (due to Rajasekaran (2000)). Nested Annealing (NA) (Rajasekaran and Reif 1992) is a variation of SA that has been proven to perform better for optimization problems whose cost function has some special properties. In this article we provide a summary of NA and its convergence properties.
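The standard Metropolis-style SA loop the abstract surveys can be sketched as follows; the bit-vector toy problem and the geometric cooling schedule are illustrative assumptions, not drawn from the article (whose convergence results concern schedules in the limit):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, T0=10.0, alpha=0.995, steps=5000, seed=0):
    """Metropolis-style SA: always accept improving moves, accept worsening
    moves with probability exp(-delta/T), cool T geometrically, and track
    the best solution seen over the whole run."""
    rng = random.Random(seed)
    x, cx = x0, cost(x0)
    best, cbest = x, cx
    T = T0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= cx or rng.random() < math.exp((cx - cy) / T):
            x, cx = y, cy
            if cx < cbest:
                best, cbest = x, cx
        T *= alpha  # geometric cooling
    return best, cbest

# toy combinatorial problem: find a 0/1 vector matching a hidden target
# (the target is an assumption, chosen only to make the cost concrete)
target = [1, 0, 1, 1, 0, 0, 1, 0]
cost = lambda x: sum(a != b for a, b in zip(x, target))

def neighbor(x, rng):
    y = list(x)
    y[rng.randrange(len(y))] ^= 1  # flip one random bit
    return y

best, cbest = simulated_annealing(cost, neighbor, [0] * 8)
```

At high temperature the exp(-delta/T) rule accepts almost any move (random search); as T cools it degenerates into greedy local search, which is the mechanism the cited convergence proofs analyze.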