Results 1–10 of 14
Experiments with Massively Parallel Constraint Solving
Twenty-First International Joint Conference on Artificial Intelligence (IJCAI'09), 2009
Abstract

Cited by 12 (1 self)
The computing industry is currently facing a major architectural shift. Extra computing power is no longer coming from higher processor frequencies, but from a growing number of computing cores and processors. For AI, and constraint solving in particular, this raises the question of how to scale current solving techniques to massively parallel architectures. While prior work focuses mostly on small-scale parallel constraint solving, in this paper we conduct the first study of the scalability of constraint solving on 100 processors and beyond. We propose techniques that are simple to apply and show empirically that they scale surprisingly well. These techniques establish a performance baseline for parallel constraint solving against which more sophisticated parallel algorithms will need to compete in the future.

1 Context and Goals of the Paper

A major achievement of the digital hardware industry in the second half of the 20th century was to engineer processors whose frequency doubled every 18 months or so. It has been clear for a few years now that this period of "free lunch", as [Sutter, 2005] put it, is behind us. The industry forecast is still that the available computational power will keep increasing exponentially, but the increase will from now on come from the number of processors available, not from the frequency of each unit. This shift from ever higher frequencies to ever more processors is perhaps the single most significant development in the computing industry today. Besides the high-performance computing facilities readily accessible to many AI practitioners in academia and industry, novel architectures provide large-scale parallelism:

• Multi-core processors are now the norm. Chip makers predict that the trend will intensify from just a few cores to many [Held et al., 2006], a shift which raises significant challenges for software development.
Embarrassingly Parallel Search
 Principles and Practice of Constraint Programming
, 2013
Abstract

Cited by 9 (2 self)
Abstract. We propose the Embarrassingly Parallel Search, a simple and efficient method for solving constraint programming problems in parallel. We split the initial problem into a large number of independent subproblems and solve them with the available workers, for instance the cores of a machine. The decomposition into subproblems is computed by selecting a subset of the variables and enumerating the combinations of values of these variables that are not detected inconsistent by the propagation mechanism of a CP solver. Experiments on satisfaction and optimization problems suggest that generating between thirty and one hundred subproblems per worker leads to good scalability. We show that our method is quite competitive with the work stealing approach and is able to solve some classical problems at the maximum capacity of multi-core machines. Thanks to this, a user can parallelize the resolution of a problem without modifying the solver or writing any parallel source code, and can easily replay the resolution of a problem.
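The decomposition described above can be illustrated on a toy CSP. The sketch below uses N-queens as a stand-in problem and a plain pairwise consistency check in place of a real CP solver's propagation; the function names and the `SPLIT` depth are illustrative assumptions, not the authors' code.

```python
from itertools import product
from multiprocessing import Pool

N = 8          # board size (toy instance; EPS itself is solver-agnostic)
SPLIT = 3      # number of variables fixed during decomposition (illustrative)

def consistent(assign):
    # Stand-in for propagation: reject assignments with an attacked pair
    # (same column or same diagonal).
    for i in range(len(assign)):
        for j in range(i + 1, len(assign)):
            if assign[i] == assign[j] or abs(assign[i] - assign[j]) == j - i:
                return False
    return True

def decompose():
    # Enumerate value combinations for the first SPLIT variables and keep
    # only those not detected inconsistent -- each one is a subproblem.
    return [p for p in product(range(N), repeat=SPLIT) if consistent(p)]

def count_solutions(prefix):
    # Solve one subproblem by plain backtracking below the fixed prefix.
    if len(prefix) == N:
        return 1
    total = 0
    for v in range(N):
        cand = prefix + (v,)
        if consistent(cand):
            total += count_solutions(cand)
    return total

if __name__ == "__main__":
    subproblems = decompose()
    with Pool() as pool:                     # workers solve independently
        counts = pool.map(count_solutions, subproblems)
    print(len(subproblems), sum(counts))     # 8-queens has 92 solutions
```

In a real EPS run, `SPLIT` would be chosen so that the subproblem count is roughly 30 to 100 times the worker count, which is what keeps all workers busy despite uneven subproblem sizes.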
Self-splitting of workload in parallel computation
Abstract

Cited by 2 (1 self)
Parallel computation requires splitting a job among a set of processing units called workers. The computation is generally performed by one or more master workers that split the workload into chunks and distribute them to a set of slave workers. In this setting, communication among workers can be problematic and/or time consuming. Tree search algorithms are particularly well suited to parallel execution, as different nodes can be processed by different workers in parallel. In this paper we propose a simple mechanism to convert a sequential tree-search code into a parallel one. In the new paradigm, called SelfSplit, each worker is able to autonomously determine, without any communication with the other workers, the parts of the job it has to process. Computational results are reported, showing that SelfSplit can achieve an almost linear speedup on hard Constraint Programming applications, even when 64 workers are considered.
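The no-communication splitting idea can be sketched in a few lines. In this toy version (N-queens again; `DEPTH`, `WORKERS` and the round-robin ownership rule are illustrative assumptions, not the paper's exact procedure), every worker enumerates the same pruned frontier deterministically and keeps only the nodes whose running index matches its rank, so no messages are ever exchanged.

```python
from itertools import product

N = 6        # toy N-queens instance
DEPTH = 2    # depth at which subtrees are handed out (assumption)
WORKERS = 4  # hypothetical worker count

def ok(a):
    # Pairwise consistency check for queen placements.
    return all(a[i] != a[j] and abs(a[i] - a[j]) != j - i
               for i in range(len(a)) for j in range(i + 1, len(a)))

def solve(prefix):
    # Sequential backtracking below a fixed prefix; counts solutions.
    if len(prefix) == N:
        return 1
    return sum(solve(prefix + (v,)) for v in range(N) if ok(prefix + (v,)))

def worker(rank):
    # Every worker enumerates the identical depth-DEPTH frontier in the same
    # order; because pruning is deterministic, the node counter stays in sync
    # across workers, and `counter % WORKERS == rank` partitions the tree
    # with zero communication.
    total, counter = 0, 0
    for node in product(range(N), repeat=DEPTH):
        if not ok(node):
            continue                     # pruned identically by every worker
        if counter % WORKERS == rank:
            total += solve(node)         # this subtree belongs to me
        counter += 1
    return total

# The union of the workers' independent shares equals the sequential count.
results = [worker(r) for r in range(WORKERS)]
```

The key design point is that ownership is a pure function of the deterministic enumeration order, which is why a sequential code can be converted with so little change.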
The Shape of the Search Tree for the Maximum Clique Problem, and the Implications for Parallel Branch and Bound
, 2014
Abstract

Cited by 2 (1 self)
Finding a maximum clique in a given graph is one of the fundamental NP-hard problems. We compare two multi-core threaded parallel adaptations of a state-of-the-art branch and bound algorithm for the maximum clique problem, and provide a novel explanation as to why they are successful. We show that load balance is sometimes a problem, but that the interaction between parallel search order and the most likely location of solutions within the search space is often the dominating consideration. We use this explanation to propose a new low-overhead, scalable work splitting mechanism. Our approach uses explicit early diversity to avoid strong commitment to the weakest heuristic advice, and late re-splitting for balance.
Distributed Work Stealing for Constraint Solving (Extended Abstract)
Abstract

Cited by 1 (0 self)
Abstract. With the dissemination of affordable parallel and distributed hardware, parallel and distributed constraint solving has lately been the focus of some attention. To effectively harness the power of distributed computational systems, the work involved in searching for a solution to a Constraint Satisfaction Problem (CSP) must be shared among all the participating agents, and this must happen dynamically, since it is hard to predict the effort associated with exploring any given part of the search space. We describe and experimentally assess an implementation of a work stealing-based approach to parallel CSP solving in a distributed setting.
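A minimal shared-memory sketch of the dynamic work sharing the abstract describes: here Python threads stand in for distributed agents, a single global lock replaces message passing, and termination is detected with a pending-task counter. All names are illustrative assumptions; a distributed implementation would steal via messages instead.

```python
import threading
from collections import deque

N = 7        # toy N-queens instance; the paper targets full CP solvers
WORKERS = 3  # hypothetical worker count

def ok(a):
    # Pairwise consistency check for queen placements.
    return all(a[i] != a[j] and abs(a[i] - a[j]) != j - i
               for i in range(len(a)) for j in range(i + 1, len(a)))

class StealingPool:
    def __init__(self):
        self.queues = [deque() for _ in range(WORKERS)]  # one deque per worker
        self.lock = threading.Lock()   # one coarse lock keeps the sketch simple
        self.pending = 0               # queued tasks + tasks being expanded
        self.solutions = 0

    def push(self, rank, node):
        with self.lock:
            self.queues[rank].append(node)
            self.pending += 1

    def pop_or_steal(self, rank):
        with self.lock:
            if self.queues[rank]:
                return self.queues[rank].pop()       # own work: LIFO (depth-first)
            for victim in range(WORKERS):            # idle: steal from the
                if self.queues[victim]:              # opposite (FIFO) end
                    return self.queues[victim].popleft()
        return None

    def worker(self, rank):
        while True:
            node = self.pop_or_steal(rank)
            if node is None:
                with self.lock:
                    if self.pending == 0:   # no queued or in-flight work left
                        return
                continue
            if len(node) == N:
                with self.lock:
                    self.solutions += 1
            else:
                for v in range(N):
                    child = node + (v,)
                    if ok(child):
                        self.push(rank, child)
            with self.lock:
                self.pending -= 1           # decremented only after expansion

pool = StealingPool()
pool.push(0, ())                            # root of the search tree
threads = [threading.Thread(target=pool.worker, args=(r,))
           for r in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Decrementing `pending` only after a node's children are pushed is what makes the termination test safe: `pending == 0` then implies no agent still holds unexpanded work.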
The Organizing Committee.
, 2009
Abstract

Cited by 1 (0 self)
The areas of AI planning and scheduling have seen important advances thanks to the application of constraint satisfaction techniques. Currently, many important real-world problems require efficient constraint handling for planning, scheduling and resource allocation among competing goal activities over time, in the presence of complex state-dependent constraints. Solutions to these problems must therefore integrate resource allocation and plan synthesis capabilities. In essence, we need to manage complex problems in which planning, scheduling and constraint satisfaction are interrelated, and which carry great application potential. The workshop aims at providing a forum for meeting and exchanging ideas and novel work in the fields of AI planning, scheduling, constraint satisfaction techniques, and the many relationships that exist among them. In fact, most of the received works are based on combined approaches of constraint satisfaction for planning, scheduling, and mixed planning and scheduling. The workshop was held in September 2009 in Thessaloniki, Greece, during the International Conference on Automated Planning & Scheduling (ICAPS'09). All the submissions were reviewed by at least two anonymous referees from the program committee, who decided to accept 7 papers for oral presentation at the workshop.
Improvement of the Embarrassingly Parallel Search for Data Centers
Abstract

Cited by 1 (0 self)
Abstract. We propose an adaptation of the Embarrassingly Parallel Search (EPS) method for data centers. EPS is a simple but efficient method for the parallel solving of CSPs: it decomposes the problem into many distinct subproblems which are then solved independently by workers. EPS performed well on multi-core machines (40 cores), but some issues arise when using more cores in a data center. Here, we identify the decomposition as the cause of the degradation and propose a parallel decomposition to address this issue. Thanks to this, EPS gives almost linear speedup and outperforms work stealing by orders of magnitude using the Gecode solver.
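The parallel decomposition can be sketched as a level-by-level expansion of the frontier, where each level is expanded by a worker pool rather than a single core. This is a hypothetical illustration on N-queens with a trivial consistency check standing in for propagation; `TARGET` and the thread pool are assumptions, not the paper's implementation (and under CPython's GIL the threads illustrate the structure rather than real speedup).

```python
from concurrent.futures import ThreadPoolExecutor

N = 8          # toy instance
TARGET = 100   # hypothetical target subproblem count (EPS suggests ~30-100
               # subproblems per worker)

def ok(a):
    # Pairwise consistency check for queen placements.
    return all(a[i] != a[j] and abs(a[i] - a[j]) != j - i
               for i in range(len(a)) for j in range(i + 1, len(a)))

def expand(prefix):
    # One decomposition step: extend a prefix by every value that the
    # consistency check (standing in for real propagation) does not reject.
    return [prefix + (v,) for v in range(N) if ok(prefix + (v,))]

def parallel_decompose():
    # Grow the frontier level by level; each level is expanded by a pool of
    # workers instead of one core, which is the bottleneck fix described in
    # the abstract.
    frontier = [()]
    with ThreadPoolExecutor(max_workers=4) as pool:
        while len(frontier) < TARGET and len(frontier[0]) < N:
            frontier = [child for children in pool.map(expand, frontier)
                        for child in children]
    return frontier

subs = parallel_decompose()   # every subproblem is a consistent fixed prefix
```

The loop stops at the first level whose size reaches the target, so all subproblems share the same depth and the decomposition work itself is spread across the pool.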
Using Cloud Computing for Solving Constraint Programming Problems
Abstract
Abstract. We propose to use cloud computing for solving constraint programming problems in parallel. We used the Embarrassingly Parallel Search (EPS) method in conjunction with Microsoft Azure, the cloud computing platform and infrastructure created by Microsoft. EPS decomposes the problem into many distinct subproblems which are then solved independently by workers. EPS has three advantages: it is an efficient method, it is simple to deploy, and it involves almost no communication between workers. Thus, EPS is particularly well suited to a cloud infrastructure. Experimental results show gain ratios equivalent to those obtained on a parallel machine or a data center, showing the strength of EPS when used in conjunction with a cloud infrastructure. We also compute the number of cores a cloud infrastructure requires to improve the resolution by a factor of k, and we discuss the price to pay for solving a given problem in a given amount of time.
Solving scheduling problems using parallel message-passing based constraint programming
Abstract