Results 1–10 of 24
Partial multicuts in trees
 In Proceedings of the 3rd International Workshop on Approximation and Online Algorithms
, 2005
Cited by 8 (4 self)
Abstract. Let T = (V, E) be an undirected tree, in which each edge is associated with a nonnegative cost, and let {s1, t1}, ..., {sk, tk} be a collection of k distinct pairs of vertices. Given a requirement parameter t ≤ k, the partial multicut on a tree problem asks to find a minimum-cost set of edges whose removal from T disconnects at least t out of these k pairs. This problem generalizes the well-known multicut on a tree problem, in which we are required to disconnect all given pairs. The main contribution of this paper is an (8/3 + ε)-approximation algorithm for partial multicut on a tree, whose running time is strongly polynomial for any fixed ε > 0. This result is achieved by introducing problem-specific insight to the general framework of using the Lagrangian relaxation technique in approximation algorithms. Our algorithm utilizes a heuristic for the closely related prize-collecting variant, in which we are not required to disconnect all pairs, but rather incur penalties for failing to do so. We provide a Lagrangian multiplier preserving algorithm for the latter problem, with an approximation factor of 2. Finally, we present a new 2-approximation algorithm for multicut on a tree, based on LP-rounding.
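The problem statement is easy to make concrete. The sketch below is our own illustration (all names are hypothetical, and brute-force enumeration is emphatically not the paper's algorithm): it tries every edge subset of a tiny tree and returns the cheapest one separating at least t of the given pairs.

```python
from collections import defaultdict, deque
from itertools import combinations

def separated_pairs(edges, removed, pairs):
    """Count the (s, t) pairs that are disconnected after deleting `removed`."""
    adj = defaultdict(list)
    for u, v, _ in edges:
        if (u, v) not in removed and (v, u) not in removed:
            adj[u].append(v)
            adj[v].append(u)

    def reachable(s):
        seen, queue = {s}, deque([s])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        return seen

    return sum(1 for s, t in pairs if t not in reachable(s))

def partial_multicut_bruteforce(edges, pairs, t):
    """Cheapest edge set separating at least t pairs; exponential, tiny inputs only."""
    best_cost, best = float('inf'), None
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            removed = {(u, v) for u, v, _ in subset}
            if separated_pairs(edges, removed, pairs) >= t:
                cost = sum(c for _, _, c in subset)
                if cost < best_cost:
                    best_cost, best = cost, removed
    return best_cost, best

# Path a - b - c with edge costs 5 and 2; pairs {a, b} and {b, c}; t = 1.
cost, cut = partial_multicut_bruteforce(
    [('a', 'b', 5), ('b', 'c', 2)], [('a', 'b'), ('b', 'c')], 1)
# Cutting the cheap edge (b, c) separates one pair, so cost == 2.
```

With t = k the same instance degenerates to ordinary multicut on a tree: both edges must go, at cost 7.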
Scheduling with Outliers
Cited by 8 (1 self)
Abstract. In classical scheduling problems, we are given jobs and machines, and have to schedule all the jobs to minimize some objective function. What if each job has a specified profit, and we are no longer required to process all jobs? Instead, we can schedule any subset of jobs whose total profit is at least a (hard) target profit requirement, while still trying to approximately minimize the objective function. We refer to this class of problems as scheduling with outliers. This model was initiated by Charikar and Khuller (SODA '06) for minimum max-response time in broadcast scheduling. In this paper, we consider three other well-studied scheduling objectives: the generalized assignment problem, average weighted completion time, and average flow time, for which LP-based approximation algorithms are provided. Our main results are:
– For the minimum average flow time problem on identical machines, we give an LP-based logarithmic approximation algorithm for the unit-profit case, and complement this result by presenting a matching integrality gap.
– For the average weighted completion time problem on unrelated machines, we give a constant-factor approximation. The algorithm is based on randomized rounding of the time-indexed LP relaxation strengthened by knapsack-cover inequalities.
– For the generalized assignment problem with outliers, we outline a simple reduction to GAP without outliers to obtain an algorithm whose makespan is within 3 times the optimum makespan, and whose cost is at most (1 + ε) times the optimal cost.
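The hard profit constraint can be seen in miniature on a single machine, where minimizing the total load of the scheduled jobs subject to a profit target reduces to a coverage-knapsack DP. This toy sketch is our illustration, not one of the paper's LP-based algorithms:

```python
def min_load_with_profit_target(jobs, target):
    """jobs: list of (processing_time, profit) pairs.
    Minimize the total processing time of the scheduled subset subject to
    total profit >= target, via a 0/1-knapsack-style DP over profit."""
    INF = float('inf')
    # dp[q] = minimum load over subsets achieving profit at least q
    dp = [0] + [INF] * target
    for time, profit in jobs:
        for q in range(target, 0, -1):  # descending: each job used at most once
            prev = max(q - profit, 0)
            if dp[prev] + time < dp[q]:
                dp[q] = dp[prev] + time
    return dp[target]

# Three jobs as (time, profit); the profit target 3 is best met by the
# first two jobs, giving total load 4 and leaving the third job an outlier.
# min_load_with_profit_target([(3, 2), (1, 1), (4, 3)], 3) == 4
```

The DP is pseudo-polynomial in the target, which is exactly why the paper resorts to LP relaxations (strengthened with knapsack-cover inequalities) for the multi-machine objectives.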
Combinatorial interpretations of dual fitting and primal fitting
 In Proc. of the First International Workshop on Approximation and Online Algorithms (WAOA ’03), LNCS 2909
, 2003
Cited by 7 (2 self)
We present two new combinatorial approximation frameworks that are not based on LP-duality, or even on linear programming. Instead, they are based on weight manipulation in the spirit of the local ratio technique. We show that the first framework is equivalent to the LP-based method of dual fitting and that the second framework is equivalent to an LP-based method which we define and call primal fitting. Our equivalence results are similar to the proven equivalence between the (combinatorial) local ratio technique and the (LP-based) primal-dual schema. We demonstrate the frameworks by presenting alternative analyses of the greedy algorithm for the set cover problem, and of known algorithms for the metric uncapacitated facility location problem. We use our second framework to design an approximation algorithm for the multiple metric uncapacitated facility location problem. We also analyze a 9-approximation algorithm for a disk cover problem that arises in the design phase of cellular telephone networks. Finally, we present constant-factor approximation algorithms for the prize-collecting and partial versions of this disk cover problem.
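For reference, the greedy set cover algorithm whose analysis dual fitting recovers is short enough to state in full (a standard textbook version, assuming dict-based input; not code from the paper):

```python
def greedy_set_cover(universe, sets):
    """sets: dict mapping a set name to its elements. Repeatedly pick the set
    covering the most still-uncovered elements; this is the classic greedy
    whose H_n approximation guarantee the dual-fitting analysis certifies."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda name: len(sets[name] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe is not coverable by the given sets")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Greedy picks 'A' (3 new elements), then 'C' (2 new elements):
# greedy_set_cover({1, 2, 3, 4, 5}, {'A': {1, 2, 3}, 'B': {3, 4}, 'C': {4, 5}})
# == ['A', 'C']
```

In the dual-fitting view, each covered element is charged the average cost of the iteration that covered it; scaling those charges down by H_n yields a feasible dual solution, which is the LP fact the paper's combinatorial framework reproduces without the LP.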
PROTOTYPE SELECTION FOR INTERPRETABLE CLASSIFICATION
Cited by 6 (0 self)
This paper is dedicated to the memory of Sam Roweis. Prototype methods seek a minimal subset of samples that can serve as a distillation or condensed view of a data set. As the size of modern data sets grows, being able to present a domain specialist with a short list of “representative” samples chosen from the data set is of increasing interpretative value. While much recent statistical research has been focused on producing sparse-in-the-variables methods, this paper aims at achieving sparsity in the samples. We discuss a method for selecting prototypes in the classification setting (in which the samples fall into known discrete categories). Our method of focus is derived from three basic properties that we believe a good prototype set should satisfy. This intuition is translated into a set cover optimization problem, which we solve approximately using standard approaches. While prototype selection is usually viewed as purely a means toward building an efficient classifier, in this paper we emphasize the inherent value of having a set of prototypical elements. That said, by using the nearest-neighbor rule on the set of prototypes, we can of course discuss our method as a classifier as well. We demonstrate the interpretative value of producing prototypes on the well-known USPS ZIP code digits data set and show that as a classifier it performs reasonably well. We apply the method to a proteomics data set in which the samples are strings and therefore not naturally embedded in a vector space. Our method is compatible with any dissimilarity measure, making it amenable to situations in which using a non-Euclidean metric is desirable or even necessary.
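The set-cover translation can be sketched as follows. This is our simplified rendering: a faithful implementation would follow the paper's three properties, including penalties for covering samples of the wrong class, whereas here a candidate simply covers nearby same-class samples.

```python
def select_prototypes(points, labels, dist, eps):
    """Greedy set-cover heuristic for prototype selection: candidate i 'covers'
    the same-class samples within distance eps of it; pick candidates greedily
    until every sample is covered. Works with any dissimilarity `dist` --
    no vector-space embedding of the samples is needed."""
    n = len(points)
    covers = [{j for j in range(n)
               if labels[j] == labels[i] and dist(points[i], points[j]) <= eps}
              for i in range(n)]
    uncovered = set(range(n))
    prototypes = []
    while uncovered:  # terminates: every sample covers at least itself
        i = max(range(n), key=lambda k: len(covers[k] & uncovered))
        prototypes.append(i)
        uncovered -= covers[i]
    return prototypes

# Two well-separated clusters of one class: one prototype per cluster suffices.
# select_prototypes([0, 1, 10, 11], [0, 0, 0, 0], lambda a, b: abs(a - b), 2)
# == [0, 2]
```

Because only the matrix of pairwise dissimilarities enters the computation, the same code applies verbatim to string-valued samples such as the proteomics data mentioned above.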
All-norms and all-Lp-norms approximation algorithms
, 2007
Cited by 4 (1 self)
ABSTRACT. In many optimization problems, a solution can be viewed as ascribing a “cost” to each client, and the goal is to optimize some aggregation of the per-client costs. We often optimize some Lp-norm (or some other symmetric convex function or norm) of the vector of costs—though different applications may suggest different norms to use. Ideally, we could obtain a solution that optimizes several norms simultaneously. In this paper, we examine approximation algorithms that simultaneously perform well on all norms, or on all Lp norms. A natural problem in this framework is the Lp Set Cover problem, which generalizes SET COVER and MIN-SUM SET COVER. We show that the greedy algorithm simultaneously gives a (p + ln p + O(1))-approximation for all p, and show that this approximation ratio is optimal up to constants under reasonable complexity-theoretic assumptions. We additionally show how to use our analysis techniques to give similar results for the more general submodular set cover, and prove some results for the so-called pipelined set cover problem. We then go on to examine approximation algorithms in the “all-norms” and the “all-Lp-norms” frameworks more broadly, and present algorithms and structural results for other problems such as k-facility-location, TSP, and average flow-time minimization, extending and unifying previously known results.
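The objective interpolating between the two classical problems is easy to write down: an ordering of the sets assigns each element a cover time, and the Lp Set Cover objective is the Lp norm of that vector. A minimal sketch in our own notation (not the paper's code):

```python
def cover_times(order, sets):
    """Cover time of an element = 1-based index of the first set in `order`
    that contains it."""
    t = {}
    for step, name in enumerate(order, start=1):
        for e in sets[name]:
            t.setdefault(e, step)
    return t

def lp_objective(order, sets, universe, p):
    """L_p Set Cover objective: the L_p norm of the cover-time vector.
    p = 1 recovers the min-sum objective; as p grows, the norm is dominated
    by the largest cover time, approaching the plain set cover objective."""
    t = cover_times(order, sets)
    return sum(t[e] ** p for e in universe) ** (1 / p)

sets = {'A': {1, 2, 3}, 'C': {4, 5}}
# Under the order A, C the cover times are (1, 1, 1, 2, 2), so the
# L1 objective is 1 + 1 + 1 + 2 + 2 = 7.
# lp_objective(['A', 'C'], sets, {1, 2, 3, 4, 5}, 1) == 7.0
```

The paper's point is that one greedy ordering is simultaneously near-optimal for every p, so no per-norm tuning of the ordering is needed.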
LAGRANGIAN RELAXATION AND PARTIAL COVER (EXTENDED ABSTRACT)
, 2008
Cited by 3 (1 self)
Lagrangian relaxation has been used extensively in the design of approximation algorithms. This paper studies its strengths and limitations when applied to Partial Cover. We show that for Partial Cover in general no algorithm that uses Lagrangian relaxation and a Lagrangian Multiplier Preserving (LMP) α-approximation as a black box can yield an approximation factor better than (4/3)α. This matches the upper bound given by Könemann et al. (ESA 2006, pages 468–479). Faced with this limitation, we study a specific, yet broad class of covering problems: Partial Totally Balanced Cover. By carefully analyzing the inner workings of the LMP algorithm we are able to give an almost tight characterization of the integrality gap of the standard linear relaxation of the problem. As a consequence we obtain improved approximations for the partial versions of Multicut and Path Hitting on Trees, Rectangle Stabbing, and Set Cover with ρ-Blocks.
On Lagrangian relaxation and subset selection problems
 In Proc. 6th Workshop on Approximation and Online Algorithms
, 2009
Cited by 2 (1 self)
We prove a general result demonstrating the power of Lagrangian relaxation in solving constrained maximization problems with arbitrary objective functions. This yields a unified approach for solving a wide class of subset selection problems with linear constraints. Given a problem in this class and some small ε ∈ (0, 1), we show that if there exists a ρ-approximation algorithm for the Lagrangian relaxation of the problem, for some ρ ∈ (0, 1), then our technique achieves a ratio of ρ/(ρ + 1) − ε to the optimal, and this ratio is tight. The number of calls to the ρ-approximation algorithm, used by our algorithms, is linear in the input size and in log(1/ε) for inputs with a cardinality constraint, and polynomial in the input size and in log(1/ε) for inputs with an arbitrary linear constraint. Using the technique we obtain approximation algorithms for natural variants of classic subset selection problems, including real-time scheduling, the maximum generalized assignment problem (GAP), and maximum weight independent set.
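The basic mechanics of the technique, stripped to a toy, look like this. We illustrate on 0/1 knapsack with an exact oracle for the relaxation (our sketch; the paper's algorithm handles approximate oracles and patches the relaxed solution to get the ρ/(ρ + 1) − ε guarantee, which this sketch does not do):

```python
def lagrangian_knapsack(items, budget, iters=60):
    """Toy Lagrangian technique on 0/1 knapsack. For a multiplier lam the
    relaxation max f(S) - lam * c(S) decomposes item by item: take every item
    whose value exceeds lam times its cost. Binary-search lam so that the
    relaxed solution respects the budget. Assumes positive costs."""
    def relaxed(lam):
        return [i for i, (value, cost) in enumerate(items) if value - lam * cost > 0]

    # For lam above every value/cost ratio the relaxed pick is empty, hence feasible.
    lo, hi = 0.0, max(v / c for v, c in items) + 1.0
    for _ in range(iters):
        lam = (lo + hi) / 2
        if sum(items[i][1] for i in relaxed(lam)) > budget:
            lo = lam   # over budget: penalize cost more heavily
        else:
            hi = lam   # within budget: try a smaller penalty
    chosen = relaxed(hi)  # invariant: relaxed(hi) stays within budget
    return chosen, sum(items[i][0] for i in chosen)

# items = (value, cost), budget 5: the search settles on the single best item.
# lagrangian_knapsack([(10, 5), (6, 4), (5, 4)], 5) == ([0], 10)
```

On this instance the final relaxed solution happens to be optimal; in general the two solutions bracketing the critical multiplier must be combined, which is where the ratio ρ/(ρ + 1) − ε and its tightness come from.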
Stochastic Combinatorial Optimization under Probabilistic Constraints
, 809
Cited by 2 (0 self)
In this paper, we present approximation algorithms for combinatorial optimization problems under probabilistic constraints. Specifically, we focus on stochastic variants of two important combinatorial optimization problems: the k-center problem and the set cover problem, with uncertainty characterized by a probability distribution over the set of points or elements to be covered. We consider these problems under adaptive and non-adaptive settings, and present efficient approximation algorithms for the case when the underlying distribution is a product distribution. In contrast to the expected cost model prevalent in the stochastic optimization literature, our problem definitions support restrictions on the probability distributions of the total costs, by incorporating constraints that bound the probability with which the incurred costs may exceed a given threshold.
An Extension of the NemhauserTrotter Theorem to Generalized Vertex Cover with Applications
, 2010
Cited by 1 (0 self)
The Nemhauser–Trotter theorem provides an algorithm which is frequently used as a subroutine in approximation algorithms for the classical Vertex Cover problem. In this paper we present an extension of this theorem so that it fits a more general variant of Vertex Cover, namely, the Generalized Vertex Cover problem, where edges are allowed to remain uncovered at a certain predetermined penalty. We show that many applications of the original Nemhauser–Trotter theorem can be obtained through our extension to Generalized Vertex Cover. These applications include a (2 − 2/d)-approximation algorithm for graphs of bounded degree d, a polynomial-time approximation scheme (PTAS) for planar graphs, a (2 − lg lg n / (2 lg n))-approximation algorithm for general graphs, and a 2k kernel for the parameterized Generalized Vertex Cover problem.
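The relaxation underlying such an extension can be sketched as follows; this is a standard formulation reconstructed from the abstract's description, not copied from the paper. Each vertex v gets a variable x_v (take v into the cover at cost c_v) and each edge e a variable z_e (leave e uncovered and pay its penalty π_e):

```latex
\begin{aligned}
\min \quad & \sum_{v \in V} c_v\, x_v \;+\; \sum_{e \in E} \pi_e\, z_e \\
\text{s.t.} \quad & x_u + x_v + z_{uv} \;\ge\; 1 && \forall\, (u,v) \in E, \\
& x_v,\; z_{uv} \;\ge\; 0 .
\end{aligned}
```

In the classical theorem, a half-integral optimal solution partitions the vertices into those valued 0, 1/2, and 1, and preprocessing may fix the 0- and 1-valued vertices; the extension establishes an analogous guarantee in the presence of the penalty variables.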