Results 11-20 of 647
GRASP and path relinking for 2-layer straight line crossing minimization
INFORMS Journal on Computing, 1999
Cited by 100 (19 self)
ABSTRACT — In this paper, we develop a greedy randomized adaptive search procedure (GRASP) for the problem of minimizing straight-line crossings in a 2-layer graph. The procedure is fast and is particularly appealing when dealing with low-density graphs. When a modest increase in computational time is allowed, the procedure may be coupled with a path relinking strategy to search for improved outcomes. Although the principles of path relinking have appeared in the tabu search literature, this search strategy has not been fully implemented and tested. We perform extensive computational experiments with more than 3,000 graph instances, first to study the effect of changes in critical search parameters and then to compare the efficiency of alternative solution procedures. Our results indicate that graph density is a major influential factor on the performance of a solution procedure. The problem of minimizing straight-line crossings in layered graphs has been studied for at least 17 years, beginning with the Relative Degree Algorithm introduced by Carpano [2]. The problem consists of aligning the two shores V1 and V2 of a bipartite graph G = (V1, V2, E) on two parallel straight lines (layers) such that the number of crossings between the edges in E is minimized.
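To make the objective concrete: with both layer orderings fixed, two edges cross exactly when their endpoints appear in opposite relative order on the two layers. A minimal Python sketch of this crossing count (illustrative only; the names and the brute-force pairwise check are ours, not the paper's GRASP):

```python
from itertools import combinations

def count_crossings(edges, pos1, pos2):
    """Count pairwise crossings of straight-line edges in a 2-layer drawing.

    edges: list of (u, v) pairs with u in layer V1 and v in layer V2.
    pos1, pos2: dicts mapping each vertex to its position on its layer.
    Two edges (u1, v1) and (u2, v2) cross iff their endpoints appear
    in opposite relative order on the two layers.
    """
    crossings = 0
    for (u1, v1), (u2, v2) in combinations(edges, 2):
        if (pos1[u1] - pos1[u2]) * (pos2[v1] - pos2[v2]) < 0:
            crossings += 1
    return crossings

# Two "twisted" edges produce exactly one crossing.
edges = [("a", "y"), ("b", "x")]
pos1 = {"a": 0, "b": 1}   # layer 1 ordering: a, b
pos2 = {"x": 0, "y": 1}   # layer 2 ordering: x, y
print(count_crossings(edges, pos1, pos2))  # -> 1
```

The minimization problem is then a search over the orderings `pos1` and `pos2`; this O(|E|^2) counter is only the evaluation step.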
Squeaky wheel optimization
Journal of Artificial Intelligence Research, 1999
Cited by 83 (3 self)
Abstract We describe a general approach to optimization which we term "Squeaky Wheel" Optimization (SWO). In SWO, a greedy algorithm is used to construct a solution, which is then analyzed to find the trouble spots, i.e., those elements that, if improved, are likely to improve the objective function score. That analysis is used to generate new priorities that determine the order in which the greedy algorithm constructs the next solution. This Construct/Analyze/Prioritize cycle continues until some limit is reached, or an acceptable solution is found. SWO can be viewed as operating on two search spaces: solutions and prioritizations. Successive solutions are only indirectly related, via the reprioritization that results from analyzing the prior solution. Similarly, successive prioritizations are generated by constructing and analyzing solutions. This "coupled search" has some interesting properties, which we discuss. We report encouraging experimental results on two domains: scheduling problems that arise in fiber-optic cable manufacturing, and graph coloring problems. The fact that these domains are very different supports our claim that SWO is a general technique for optimization.

Overview. On each iteration, the analyzer determines which elements of the problem are causing the most trouble in the current solution, and the prioritizer ensures that the constructor gives more attention to those elements on the next iteration. ("The squeaky wheel gets the grease.") The construction, analysis and prioritization are all in terms of the elements that define a problem domain. In a scheduling domain, for example, those elements might be tasks. In graph coloring, those elements might be the nodes to be colored. The three main components of SWO are:

Constructor. Given a sequence of problem elements, the constructor generates a solution using a greedy algorithm, with no backtracking. The sequence determines the order in which decisions are made, and can be thought of as a "strategy" or "recipe" for constructing a new solution. (This "solution" may violate hard constraints.)

Analyzer. The analyzer assigns a numeric "blame" factor to the problem elements that contribute to flaws in the current solution. For example, if minimizing lateness in a scheduling problem is one of the objectives, then blame would be assigned to late tasks. A key principle behind SWO is that solutions can reveal problem structure. By analyzing a solution, we can often identify elements of that solution that work well, and elements that work poorly. This information about problem structure is local, in that it may only apply to the part of the search space currently under examination, but may be useful in determining what direction the search should go next.

Prioritizer. The prioritizer uses the blame factors assigned by the analyzer to modify the previous sequence of problem elements. Elements that received blame are moved toward the front of the sequence. The higher the blame, the further the element is moved. The priority sequence plays a key role in SWO. As a difficult problem element moves forward in the sequence it is handled sooner by the constructor.
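The Construct/Analyze/Prioritize cycle described above can be sketched in a few lines. The toy domain (greedy graph coloring), the blame rule (a node's assigned color), and the reprioritization (re-sorting by blame) are our illustrative simplifications; the paper moves each element forward a distance proportional to its blame rather than fully re-sorting:

```python
import random

def construct(order, adj):
    """Constructor: greedy coloring, no backtracking. Color nodes in the
    given order with the smallest color unused by colored neighbors."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

def analyze(color):
    """Analyzer: blame each node by the color index it was forced into;
    nodes pushed into high colors are the 'squeaky wheels'."""
    return dict(color)

def prioritize(order, blame):
    """Prioritizer: move blamed elements toward the front (simplified to
    a sort by descending blame, previous order breaking ties)."""
    rank = {v: i for i, v in enumerate(order)}
    return sorted(order, key=lambda v: (-blame[v], rank[v]))

def swo(adj, iterations=20, seed=0):
    random.seed(seed)
    order = list(adj)
    random.shuffle(order)
    best = None
    for _ in range(iterations):
        color = construct(order, adj)
        n_colors = max(color.values()) + 1
        if best is None or n_colors < best:
            best = n_colors
        order = prioritize(order, analyze(color))
    return best

# Odd cycle C5: chromatic number 3.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(swo(adj))  # -> 3
```

Note that successive solutions are related only through `order`, exactly the indirect coupling of the two search spaces the abstract describes.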
Mobile Robot Localisation and Mapping in Extensive Outdoor Environments
2002
Cited by 70 (9 self)
This thesis addresses the issues of scale for practical implementations of simultaneous localisation and mapping (SLAM) in extensive outdoor environments. Building an incremental map while also using it for localisation is of prime importance for mobile robot navigation but, until recently, has been confined to small-scale, mostly indoor, environments. The critical problems for large-scale implementations are as follows. First, data association (finding correspondences between map landmarks and robot sensor measurements) becomes difficult in complex, cluttered environments, especially if the robot location is uncertain. Second, the information required to maintain a consistent map using traditional methods imposes a prohibitive computational burden as the map increases in size. And third, the mathematics for SLAM relies on assumptions of small errors and near-linearity, and these become invalid for larger maps.
Planning UMTS Base Station Location: Optimization Models with Power Control and Algorithms
IEEE Transactions on Wireless Communications, 2003
Cited by 66 (12 self)
Classical coverage models, adopted for second-generation cellular systems, are not suited for planning universal mobile telecommunication system (UMTS) base station (BS) location because they are only based on signal predictions and do not consider the traffic distribution, the signal quality requirements, and the power control (PC) mechanism. In this paper, we propose discrete optimization models and algorithms aimed at supporting the decisions in the process of planning where to locate new BSs. These models consider the signal-to-interference ratio as the quality measure and capture, at different levels of detail, the signal quality requirements and the specific PC mechanism of the wideband CDMA air interface. Given that these UMTS BS location models are NP-hard, we propose two randomized greedy procedures and a tabu search algorithm for the uplink (mobile to BS) direction, which is the most stringent one from the traffic point of view in the presence of balanced connections such as voice calls. The different models, which take into account installation costs, signal quality and traffic coverage, and the corresponding algorithms are compared on families of small- to large-size instances generated by using classical propagation models.
A Discrete Lagrangian-Based Global-Search Method for Solving Satisfiability Problems
Journal of Global Optimization, 1998
Cited by 64 (6 self)
Satisfiability is a class of NP-complete problems that model a wide range of real-world applications. These problems are difficult to solve because they have many local minima in their search space, often trapping greedy search methods that utilize some form of descent. In this paper, we propose a new discrete Lagrange-multiplier-based global-search method for solving satisfiability problems. We derive new approaches for applying Lagrangian methods in discrete space, show that equilibrium is reached when a feasible assignment to the original problem is found, and present heuristic algorithms to look for equilibrium points. Instead of restarting from a new starting point when a search reaches a local trap, the Lagrange multipliers in our method provide a force to lead the search out of a local minimum and move it in the direction provided by the Lagrange multipliers. One of the major advantages of our method is that it has very few algorithmic parameters to be tuned by users, and the se...
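The core idea, a discrete Lagrangian function whose multipliers grow on clauses that remain unsatisfied, can be sketched as follows. This is our simplified reading, not the authors' DLM implementation; the flip rule, multiplier update, and CNF encoding (a literal is an int, negative means negated) are illustrative assumptions:

```python
def unsat_clauses(clauses, assign):
    """Indices of clauses with no true literal under the assignment."""
    return [i for i, clause in enumerate(clauses)
            if not any((lit > 0) == assign[abs(lit)] for lit in clause)]

def dlm_sat(clauses, n_vars, max_steps=10000):
    """Discrete Lagrangian-style search (simplified sketch).

    L(x, lam) = sum of lam[i] over unsatisfied clauses i.
    Greedily flip the variable that most reduces L; when no flip helps
    (a local trap), increase the multipliers of the unsatisfied clauses,
    which reshapes the landscape instead of restarting the search.
    """
    assign = {v: False for v in range(1, n_vars + 1)}
    lam = [1.0] * len(clauses)
    for _ in range(max_steps):
        unsat = unsat_clauses(clauses, assign)
        if not unsat:
            return assign               # equilibrium: feasible assignment
        best_var, best_val = None, sum(lam[i] for i in unsat)
        for v in assign:
            assign[v] = not assign[v]   # tentative flip
            val = sum(lam[i] for i in unsat_clauses(clauses, assign))
            assign[v] = not assign[v]   # undo
            if val < best_val:
                best_var, best_val = v, val
        if best_var is None:            # trapped: update multipliers
            for i in unsat:
                lam[i] += 1.0
        else:
            assign[best_var] = not assign[best_var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
model = dlm_sat(clauses, 3)
print(model is not None and not unsat_clauses(clauses, model))  # -> True
```

The returned equilibrium is exactly the condition highlighted in the abstract: the search stops only when every clause is satisfied.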
Scheduling scientific workflow applications with deadline and budget constraints using genetic algorithms.
Sci. Program., 2006
Cited by 61 (6 self)
Abstract. Grid technologies have progressed towards a service-oriented paradigm that enables a new way of service provisioning based on utility computing models, which are capable of supporting diverse computing services. This enables scientific applications to take advantage of computing resources distributed worldwide to enhance their capability and performance. Many scientific applications in areas such as bioinformatics and astronomy require workflow processing, in which tasks are executed based on their control or data dependencies. Scheduling such interdependent tasks in utility Grid environments needs to consider users' QoS requirements. In this paper, we present a genetic algorithm approach to address scheduling optimization problems in workflow applications, based on two QoS constraints: deadline and budget.
Probability distribution of solution time in GRASP: An experimental investigation
Journal of Heuristics, 2000
Cited by 60 (29 self)
A GRASP (greedy randomized adaptive search procedure) is a multistart metaheuristic for combinatorial optimization. We study the probability distributions of solution time to a suboptimal target value in five GRASPs that have appeared in the literature and for which source code is available. The distributions are estimated by running 12,000 independent runs of the heuristic. Standard methodology for graphical analysis is used to compare the empirical and theoretical distributions and estimate the parameters of the distributions. We conclude that the solution time to a suboptimal target value fits a twoparameter exponential distribution. Hence, it is possible to approximately achieve linear speedup by implementing GRASP in parallel.
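The practical consequence of the exponential fit is easy to check numerically: the minimum of p independent exponential run times is again exponential with mean divided by p, so p parallel GRASP copies cut the expected time-to-target by roughly a factor of p. A small simulation (the mean and p are arbitrary choices of ours; the shift parameter of the two-parameter fit is taken as zero for simplicity):

```python
import random

random.seed(1)
MEAN = 100.0       # assumed mean single-run time-to-target
SAMPLES = 20000
P = 8              # number of independent parallel runs

# Time-to-target of one run, modeled as exponential (the fit reported
# in the paper, with zero shift).
single = [random.expovariate(1.0 / MEAN) for _ in range(SAMPLES)]

# With P independent runs, the first to hit the target decides the time.
parallel = [min(random.expovariate(1.0 / MEAN) for _ in range(P))
            for _ in range(SAMPLES)]

speedup = (sum(single) / SAMPLES) / (sum(parallel) / SAMPLES)
print(round(speedup, 1))  # close to P, i.e. near-linear speedup
```

With a nonzero shift parameter the speedup becomes slightly sublinear, which is why the paper calls the parallel speedup approximately linear.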
Guided local search and its application to the traveling salesman problem
1998
Cited by 59 (16 self)
The Traveling Salesman Problem (TSP) is one of the most famous problems in combinatorial optimization. In this paper, we examine how the techniques of Guided Local Search (GLS) and Fast Local Search (FLS) can be applied to the problem. Guided Local Search sits on top of local search heuristics, and its main aim is to guide these procedures in exploring efficiently and effectively the vast search spaces of combinatorial optimization problems. Guided Local Search can be combined with the neighborhood reduction scheme of Fast Local Search, which significantly speeds up the operations of the algorithm. The combination of GLS and FLS with TSP local search heuristics of different efficiency and effectiveness is studied in an effort to determine the dependence of GLS on the underlying local search heuristic used. Comparisons are made with some of the best TSP heuristic algorithms and general optimization techniques, which demonstrate the advantages of GLS over alternative heuristic approaches suggested for the problem.
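The GLS mechanism can be illustrated on a tiny instance: an augmented cost adds penalties on solution features (edges, for the TSP), and after each local optimum the feature of maximal utility d(e)/(1 + p(e)) is penalized. Everything below (the instance, lambda, and plain 2-opt standing in as the underlying local search) is our illustrative choice, not the paper's setup:

```python
import random

def tour_len(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, cost):
    """Repeatedly apply improving segment reversals under `cost`."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if cost(new) < cost(tour):
                    tour, improved = new, True
    return tour

def gls_tsp(dist, lam=0.3, iters=30, seed=0):
    """GLS sketch: local search runs on the augmented cost
    g(tour) + lam * sum of penalties of the tour's edges; at each
    local optimum, penalize the edge maximizing d(e) / (1 + p(e))."""
    random.seed(seed)
    n = len(dist)
    penalty = {}
    tour = list(range(n))
    random.shuffle(tour)

    def edges(t):
        return [tuple(sorted((t[i], t[(i + 1) % n]))) for i in range(n)]

    def aug(t):
        return tour_len(t, dist) + lam * sum(penalty.get(e, 0)
                                             for e in edges(t))

    best = tour
    for _ in range(iters):
        tour = two_opt(tour, aug)          # local search on augmented cost
        if tour_len(tour, dist) < tour_len(best, dist):
            best = tour                    # track best under the TRUE cost
        e = max(edges(tour),
                key=lambda e: dist[e[0]][e[1]] / (1 + penalty.get(e, 0)))
        penalty[e] = penalty.get(e, 0) + 1
    return best

# Tiny symmetric instance (illustrative distances).
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
best = gls_tsp(dist)
print(tour_len(best, dist))
```

The key design point is that penalties distort only the cost the local search sees; solution quality is always judged on the true objective.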
Feedback set problems
Handbook of Combinatorial Optimization, 1999
Cited by 56 (1 self)
ABSTRACT. This paper is a short survey of feedback set problems. It will be published in
An Experimental Evaluation of a Scatter Search for the Linear Ordering Problem
1999
Cited by 56 (19 self)
Scatter search is a population-based method that has recently been shown to yield promising outcomes for solving combinatorial and nonlinear optimization problems. Based on formulations originally proposed in the 1960s for combining decision rules and problem constraints, such as the surrogate constraint method, scatter search uses strategies for combining solution vectors that have proved effective in a variety of problem settings. In this paper, we present a scatter search implementation designed to find high-quality solutions for the NP-hard linear ordering problem (LOP), which has a significant number of applications in practice. The LOP, for example, is equivalent to the so-called triangulation problem for input-output tables in economics. Our implementation goes beyond a simple exercise in applying scatter search, by incorporating innovative mechanisms to combine solutions and to create a balance between quality and diversification in the reference set. We also use a tracking process that generates solution statistics disclosing the nature of combinations and the ranks of antecedent solutions that produced the best final solutions. Our extensive computational experiments with more than 300 instances establish the effectiveness of our procedure in relation to those previously identified to be best.
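A bare-bones scatter search skeleton for the LOP can illustrate the reference-set flow. Note the simplifications: this reference set keeps only high-quality solutions (the paper balances quality and diversification), the rank-averaging combination is a stand-in for the paper's combination mechanisms, and no improvement method is applied to combined solutions:

```python
import itertools
import random

def lop_value(M, perm):
    """LOP objective: sum of entries above the diagonal after
    reordering rows/columns of M by perm (to be maximized)."""
    n = len(perm)
    return sum(M[perm[i]][perm[j]]
               for i in range(n) for j in range(i + 1, n))

def combine(p1, p2):
    """Combine two permutations by average rank (illustrative only)."""
    rank1 = {v: i for i, v in enumerate(p1)}
    rank2 = {v: i for i, v in enumerate(p2)}
    return sorted(p1, key=lambda v: rank1[v] + rank2[v])

def scatter_search(M, ref_size=5, pop_size=20, rounds=10, seed=0):
    random.seed(seed)
    n = len(M)
    pop = []
    for _ in range(pop_size):          # diversification generation
        p = list(range(n))
        random.shuffle(p)
        pop.append(p)
    # Reference set: best solutions from the diversified population.
    ref = sorted(pop, key=lambda p: -lop_value(M, p))[:ref_size]
    for _ in range(rounds):
        # Combine every pair in the reference set, then update it.
        new = [combine(a, b) for a, b in itertools.combinations(ref, 2)]
        ref = sorted(ref + new, key=lambda p: -lop_value(M, p))[:ref_size]
    return ref[0]

M = [[0, 3, 1, 5],
     [2, 0, 4, 1],
     [9, 2, 0, 6],
     [3, 8, 2, 0]]
best = scatter_search(M)
print(lop_value(M, best))
```

The systematic pairwise combination of a small reference set, rather than random recombination over a large population, is what distinguishes scatter search from a genetic algorithm.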