Results 1 - 8 of 8
Metaheuristics in combinatorial optimization: Overview and conceptual comparison
- ACM Computing Surveys, 2003
"... The field of metaheuristics for the application to combinatorial optimization problems is a rapidly growing field of research. This is due to the importance of combinatorial optimization problems for the scientific as well as the industrial world. We give a survey of the nowadays most important meta ..."
Abstract
-
Cited by 294 (16 self)
The field of metaheuristics for combinatorial optimization problems is a rapidly growing area of research. This is due to the importance of combinatorial optimization problems for the scientific as well as the industrial world. We give a survey of today's most important metaheuristics from a conceptual point of view. We outline the different components and concepts that are used in the different metaheuristics in order to analyze their similarities and differences. Two very important concepts in metaheuristics are intensification and diversification. These are the two forces that largely determine the behaviour of a metaheuristic; they are in some ways contrary but also complementary to each other. We introduce a framework, which we call the I&D frame, in order to relate different intensification and diversification components to each other. Outlining the advantages and disadvantages of different metaheuristic approaches, we conclude by pointing out the importance of hybridization of metaheuristics as well as the integration of metaheuristics with other optimization methods.
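As a concrete (hypothetical) illustration of the intensification/diversification distinction, not the survey's actual I&D frame: in an iterated local search skeleton, the greedy inner descent plays the intensification role and the perturbation step plays the diversification role. All problem-specific callables below are placeholders.

```python
import random

def iterated_local_search(initial, neighbours, objective, perturb,
                          iterations=1000, rng=None):
    """Skeleton showing the two forces: the inner descent intensifies the
    search around the current solution; the perturbation diversifies it."""
    rng = rng or random.Random(0)
    best = current = initial
    for _ in range(iterations):
        # Intensification: greedy descent to a local optimum.
        improved = True
        while improved:
            improved = False
            for cand in neighbours(current):
                if objective(cand) < objective(current):
                    current, improved = cand, True
                    break
        if objective(current) < objective(best):
            best = current
        # Diversification: jump away from the current basin of attraction.
        current = perturb(best, rng)
    return best

# Toy usage: minimise a bumpy integer function over 0..99.
f = lambda x: (x - 37) ** 2 + 40 * (x % 7)
nbrs = lambda x: [max(0, x - 1), min(99, x + 1)]
jump = lambda x, rng: rng.randrange(100)
print(iterated_local_search(50, nbrs, f, jump))   # converges to 35, the global minimum
```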
Experiments in parallel constraint-based local search
- Evolutionary Computation in Combinatorial Optimization - 11th European Conference, EvoCOP 2011
"... Abstract. We present a parallel implementation of a constraint-based local search algorithm and investigate its performance results on hard-ware with several hundreds of processors. We choose as basic constraint solving algorithm for these experiments the ”adaptive search ” method, an efficient sequ ..."
Abstract
-
Cited by 13 (11 self)
We present a parallel implementation of a constraint-based local search algorithm and investigate its performance on hardware with several hundreds of processors. As the basic constraint solving algorithm for these experiments we choose the "adaptive search" method, an efficient sequential local search method for Constraint Satisfaction Problems. The implemented algorithm is a parallel version of adaptive search in a multiple independent-walk manner, that is, each process is an independent search engine and there is no communication between the simultaneous computations. Preliminary performance evaluation on a variety of classical CSP benchmarks shows that speedups are very good for a few tens of processors, and good up to a few hundred processors.
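The independent multi-walk scheme described above is easy to sketch. The snippet below (Python rather than the paper's C implementation, with a stubbed `solve` routine standing in for adaptive search) only illustrates the structure: every worker runs the same randomized search with a different seed, there is no communication, and the first solution returned wins.

```python
import multiprocessing as mp
import random

def solve(seed, instance):
    """Stand-in for a sequential randomized local search (e.g. adaptive search).
    Runtime and result depend on the random seed."""
    rng = random.Random(seed)
    # ... run the local search over `instance` using rng ...
    return {"seed": seed, "solution": None}   # stub result

def worker(seed, instance, results):
    results.put(solve(seed, instance))

def independent_multiwalk(instance, n_workers=8):
    """Launch n_workers independent walks; return the first result produced."""
    results = mp.Queue()
    procs = [mp.Process(target=worker, args=(seed, instance, results))
             for seed in range(n_workers)]
    for p in procs:
        p.start()
    first = results.get()        # blocks until one walk finishes
    for p in procs:              # the remaining walks are simply discarded
        p.terminate()
        p.join()
    return first

if __name__ == "__main__":
    print(independent_multiwalk(instance=None, n_workers=4))
```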
Parallel constraint-based local search on the Cell/BE multicore architecture
- In Proceedings of IDC2010, Intelligent Distributed Computing IV, 2010
"... Abstract. In this study, we started to investigate how the Partitioned Global Address Space (PGAS) programming language X10 would suit the implementation of a Constraint-Based Local Search solver. We wanted to code in this language because we expect to gain from its ease of use and independence from ..."
Abstract
-
Cited by 7 (5 self)
In this study, we started to investigate how the Partitioned Global Address Space (PGAS) programming language X10 would suit the implementation of a Constraint-Based Local Search solver. We chose this language because we expected to gain from its ease of use and its independence from specific parallel architectures. We present our implementation strategy and explore different sources of parallelism. We discuss the algorithms and their implementations, and present a performance evaluation on a representative set of benchmarks.
Codognet, P.: Prediction of Parallel Speed-ups for Las Vegas Algorithms
- Proceedings of ICPP-2013, 42nd International Conference on Parallel Processing, IEEE, 2013
"... hal-00870979, version 1-Abstract—We propose a probabilistic model for the parallel execution of Las Vegas algorithms, i.e. randomized algorithms whose runtime might vary from one execution to another, even with the same input. This model aims at predicting the parallel performances (i.e. speedups) b ..."
Abstract
-
Cited by 6 (1 self)
We propose a probabilistic model for the parallel execution of Las Vegas algorithms, i.e. randomized algorithms whose runtime may vary from one execution to another, even with the same input. This model aims at predicting parallel performance (i.e. speedups) by analyzing the runtime distribution of the sequential runs of the algorithm. We then study in practice the case of a particular Las Vegas algorithm for combinatorial optimization on three classical problems, and compare the model with an actual parallel implementation on up to 256 cores. We show that the prediction can be accurate, matching the actual speedups very well up to 100 parallel cores and then with a deviation of about 20% up to 256 cores.
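The core idea can be sketched as follows (a simplified reading, assuming the independent multi-walk setting: the parallel runtime on n cores behaves like the minimum of n independent draws from the sequential runtime distribution, so expected speedups can be estimated from a sample of sequential runtimes; the paper's exact estimator may differ).

```python
import random

def predicted_speedup(seq_runtimes, n_cores, trials=20_000, rng=None):
    """Monte Carlo estimate of the expected speedup on n_cores, assuming the
    parallel runtime is the minimum of n_cores i.i.d. sequential runtimes."""
    rng = rng or random.Random(0)
    mean_seq = sum(seq_runtimes) / len(seq_runtimes)
    mean_par = sum(min(rng.choice(seq_runtimes) for _ in range(n_cores))
                   for _ in range(trials)) / trials
    return mean_seq / mean_par

# Hypothetical sample of sequential runtimes; exponential-like behaviour
# yields near-linear speedups under this model.
runs = [random.expovariate(1.0) for _ in range(2000)]
for n in (16, 64, 256):
    print(n, round(predicted_speedup(runs, n), 1))
```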
Performance analysis of parallel constraint-based local search
- In PPoPP 2012, 17th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
"... We present a parallel implementation of a constraint-based local search algorithm and investigate its performance results for hard combinatorial optimization problems on two different platforms up to several hundreds of cores. On a variety of classical CSPs bench-marks, speedups are very good for a ..."
Abstract
-
Cited by 4 (3 self)
We present a parallel implementation of a constraint-based local search algorithm and investigate its performance on hard combinatorial optimization problems on two different platforms with up to several hundreds of cores. On a variety of classical CSP benchmarks, speedups are very good for a few tens of cores, and good up to a hundred cores. A more challenging problem derived from real-life applications (the Costas array) shows even better speedups, nearly optimal up to 256 cores.
Large-Scale Parallelism for Constraint-Based Local Search: The Costas Array Case Study
"... Abstract We present the parallel implementation of a constraint-based Local Search algorithm and investigate its performance on several hardware plat-forms with several hundreds or thousands of cores. We chose as the basis for these experiments the Adaptive Search method, an efficient sequential Loc ..."
Abstract
- Add to MetaCart
We present the parallel implementation of a constraint-based Local Search algorithm and investigate its performance on several hardware platforms with several hundreds or thousands of cores. We chose as the basis for these experiments the Adaptive Search method, an efficient sequential Local Search method for Constraint Satisfaction Problems (CSP). After preliminary experiments on some CSPLib benchmarks, we detail the modeling and solving of a hard combinatorial problem related to radar and sonar applications: the Costas Array Problem. Performance evaluation on some classical CSP benchmarks shows that speedups are very good for a few tens of cores, and good up to a few hundreds of cores. However, for a hard combinatorial search problem such as the Costas Array Problem, the sequential version outperforms previous Local Search implementations, while the parallel version shows nearly linear speedups up to 8,192 cores. The proposed parallel scheme is simple and based on independent multi-walks with no communication between processes during search. We also investigated a cooperative multi-walk scheme where processes share simple information, but this scheme does not seem to improve performance.
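For reference, the combinatorial structure behind the case study: a Costas array of order n is a permutation whose difference triangle has no repeated value within any row. A minimal check (Python, unrelated to the paper's implementation):

```python
def is_costas(perm):
    """perm is a permutation of 0..n-1 (column i carries a mark in row perm[i]).
    Costas property: for every distance d, the differences perm[i+d] - perm[i]
    are pairwise distinct."""
    n = len(perm)
    for d in range(1, n):
        diffs = [perm[i + d] - perm[i] for i in range(n - d)]
        if len(diffs) != len(set(diffs)):
            return False
    return True

assert is_costas([0, 2, 1])          # a small Costas array (order 3)
assert not is_costas([0, 1, 2, 3])   # distance-1 differences are all equal
```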
Targeting the Cell Broadband Engine for constraint-based local search
- Concurrency and Computation: Practice and Experience, 2011. DOI: 10.1002/cpe.1855
"... We investigated the use of the Cell Broadband Engine (Cell/BE) for constraint-based local search and combinatorial optimization applications. We presented a parallel version of a constraint-based local search algorithm that was chosen because it fits very well the Cell/BE architecture because it req ..."
Abstract
- Add to MetaCart
(Show Context)
We investigated the use of the Cell Broadband Engine (Cell/BE) for constraint-based local search and combinatorial optimization applications. We present a parallel version of a constraint-based local search algorithm, chosen because it fits the Cell/BE architecture very well: it requires neither shared memory nor communication among processors. The performance study on several large optimization benchmarks shows mostly linear speedups, sometimes even super-linear. These experiments were carried out on a dual-Cell IBM blade with 16 processors. Besides the speedups, the execution times exhibit a much smaller variance, which benefits applications where a timely reply is critical.
Parallel local search for the Costas Array Problem
"... Abstract—The Costas Array Problem is a highly combina-torial problem linked to radar applications. We present in this paper its detailed modeling and solving by Adaptive Search, a constraint-based local search method. Experiments have been done on both sequential and parallel hardware up to several ..."
Abstract
- Add to MetaCart
The Costas Array Problem is a highly combinatorial problem linked to radar applications. We present in this paper its detailed modeling and solving by Adaptive Search, a constraint-based local search method. Experiments have been done on both sequential and parallel hardware with up to several hundreds of cores. Performance evaluation of the sequential version shows results outperforming previous implementations, while the parallel version shows nearly linear speedups up to 8,192 cores.
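A rough sketch of an Adaptive-Search-flavoured loop for this problem (hedged: the cost function below simply counts colliding differences, the variable selection is randomized rather than error-driven, and the real solver's tabu and restart policies are reduced to a plain restart; it is not the authors' algorithm).

```python
import random

def cost(perm):
    """Number of colliding pairs in the difference triangle; 0 means Costas."""
    n, total = len(perm), 0
    for d in range(1, n):
        seen = {}
        for i in range(n - d):
            diff = perm[i + d] - perm[i]
            total += seen.get(diff, 0)
            seen[diff] = seen.get(diff, 0) + 1
    return total

def costas_local_search(n, max_iters=200_000, rng=None):
    """Swap-based local search: pick a column, apply the best improving swap,
    restart from a random permutation when no swap improves."""
    rng = rng or random.Random()
    perm = list(range(n)); rng.shuffle(perm)
    current = cost(perm)
    for _ in range(max_iters):
        if current == 0:
            return perm
        i = rng.randrange(n)                 # stand-in for the max-error variable
        best_j, best_c = None, current
        for j in range(n):
            if j == i:
                continue
            perm[i], perm[j] = perm[j], perm[i]
            c = cost(perm)
            perm[i], perm[j] = perm[j], perm[i]
            if c < best_c:
                best_j, best_c = j, c
        if best_j is None:                   # local minimum: restart
            rng.shuffle(perm); current = cost(perm)
        else:
            perm[i], perm[best_j] = perm[best_j], perm[i]
            current = best_c
    return None                              # not found within the budget

print(costas_local_search(7))                # a Costas permutation of order 7, or None
```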