Results 1–7 of 7
Pricing on Paths: A PTAS for the Highway Problem
Abstract
Cited by 13 (1 self)
In the highway problem, we are given an n-edge line graph (the highway) and a set of paths (the drivers), each with its own budget. For a given assignment of edge weights (the tolls), the highway owner collects from each driver the weight of the associated path when it does not exceed the driver's budget, and zero otherwise. The goal is to choose weights so as to maximize the profit. A lot of research has been devoted to this apparently simple problem. The highway problem was shown to be strongly NP-hard only recently [Elbassioni, Raman, Ray, Sitters '09]. The best-known approximation is O(log n / log log n) [Gamzu, Segev '10], which improves on the previous-best O(log n) approximation [Balcan, Blum '06]. Better approximations are known for a number of special cases. Finding a constant (or better!) approximation algorithm for the general case is a challenging open problem. In this paper we present a PTAS for the highway problem, hence settling the complexity status of the problem. Our result is based on a novel randomized dissection approach, which has some points in common with Arora's quadtree dissection for Euclidean network design [Arora '98]. The basic idea is to enclose the highway in a bounding path, such that both the size of the bounding path and the position of the highway in it are random variables. Then we consider a recursive O(1)-ary dissection of the bounding path into subpaths of uniform optimal weight. Since the optimal weights are unknown, we construct the dissection in a bottom-up fashion via dynamic programming, while computing the approximate solution at the same time. Our algorithm can easily be derandomized. The same basic approach also provides PTASs for two generalizations of the problem: the tollbooth problem with a constant number of leaves and the maximum-feasibility subsystem problem on interval matrices. In both cases the previous best approximation factors are polylogarithmic [Gamzu, Segev '10; Elbassioni, Raman, Ray, Sitters '09].
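The objective described in the abstract can be made concrete with a small brute-force sketch (my own illustration, not the paper's PTAS: the function names, the driver encoding as (first_edge, last_edge_exclusive, budget), and the integer-toll restriction are all assumptions for demonstration only):

```python
# Hedged sketch of the highway problem's objective: a driver pays the sum of
# the tolls on its path iff that sum is within its budget, else pays nothing.
# Exhaustive search over integer tolls; exponential, for tiny instances only.
from itertools import product

def profit(tolls, drivers):
    """Total profit collected under a given toll assignment.
    Each driver is (start, end, budget): its path uses edges start..end-1."""
    total = 0
    for start, end, budget in drivers:
        price = sum(tolls[start:end])
        if price <= budget:          # driver buys only within budget
            total += price           # owner collects the path's full price
    return total

def best_tolls(n_edges, drivers, max_toll):
    """Best integer toll vector in {0,...,max_toll}^n_edges by brute force."""
    best_profit, best_assignment = 0, None
    for tolls in product(range(max_toll + 1), repeat=n_edges):
        p = profit(tolls, drivers)
        if p > best_profit:
            best_profit, best_assignment = p, tolls
    return best_profit, best_assignment
```

For example, with two edges and drivers [(0, 2, 3), (0, 1, 2), (1, 2, 2)], the tolls (2, 1) extract every budget in full for a profit of 6, whereas (2, 2) prices the first driver out and yields only 4; this "pricing out" effect is what makes the problem hard.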
Reducing the optimum: super-exponential in opt time, fixed parameter inapproximability for clique and set-cover
, 2013
Reducing the optimum: Super-exponential time fixed parameter inapproximability for clique and set-cover
, 2013
The Foundation of Fixed Parameter Inapproximability
, 2013
Abstract
Given an instance I of a minimization problem with optimum opt, fixed parameter ρ(k) inapproximability is to find a k ≥ opt and prove that it is not possible to compute a solution of value ρ(k) · k, usually under the Exponential Time Hypothesis (ETH). In this paper we are interested only in k being the optimum value of some instance. Our question is: what properties make a good fixed parameter inapproximability proof? We claim that fixed parameter inapproximability should be done, whenever possible, with parameter opt. We show simple examples where k is far from opt, for which fixed parameter inapproximability in k is not possible while fixed parameter tractability in k is trivial to prove; such results are therefore meaningless. To reduce with parameter opt, we need opt to be known. The way to achieve that is to make opt the value of a yes instance in a gap reduction. An (r, t)-FPT-hardness in opt, for two functions r and t, is showing that the problem admits no r(opt) approximation that runs in time t(opt) · n^O(1) (for maximization problems any solution has to be super-constant). Our main claim is that the art of fixed parameter inapproximability is the art of
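The (r, t)-FPT-hardness condition quoted in this abstract can be written out as a single statement (the quantifier structure below is my reading of the abstract, not the paper's exact definition):

```latex
% (r,t)-FPT-hardness in opt, as sketched from the abstract:
% no algorithm achieves an r(opt)-approximation in FPT-style time t(opt)*poly(n).
\[
  \nexists\,\mathcal{A}\;\; \forall I :\quad
  \mathcal{A}(I) \;\le\; r\bigl(\mathrm{opt}(I)\bigr)\cdot \mathrm{opt}(I)
  \quad\text{in time}\quad
  t\bigl(\mathrm{opt}(I)\bigr)\cdot n^{O(1)},
\]
```

where n = |I|; the point of parameterizing by opt rather than an arbitrary k ≥ opt is exactly the claim the abstract argues for.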
Reducing the optimum value: FPT . . .
, 2013
Abstract
Fixed parameter ρ(k) inapproximability in minimization problems is, given some instance I of a problem with optimum opt, to find some k ≥ opt and prove that it is not possible to compute a solution of value ρ(k) · k, usually under the Exponential Time Hypothesis (ETH). If opt is known, inapproximability in terms of opt implies inapproximability in terms of k. An (r, t)-FPT-hardness (in opt), for two functions r and t, is showing that the problem admits no r(opt) approximation that runs in time t(opt) · n^O(1) (for maximization problems any solution has to be super-constant). In this paper we are only interested in t(opt) that is super-exponential in opt. Fellows [9] conjectured that set-cover and clique are (r, t)-FPT-hard for any pair of nondecreasing functions r, t and input parameter k. We give the first inapproximability for these problems that runs in time super-exponential in opt. Our paper is also the first to introduce systematic techniques to reduce the value of the optimum. These techniques work for 3 totally different problems. We prove that under ETH [14] and the Projection Game Conjecture [19], set-cover